Tag: developmental design

complexity, evaluation, social innovation

Developmental Evaluation’s Traps


Developmental evaluation holds promise for product and service designers looking to understand the process, outcomes, and strategies of innovation and link them to effects. Yet the very promise of DE is also the reason to be most wary of it, and to watch for the traps set for the unaware.

Developmental evaluation (DE), when used to support innovation, is about weaving design with data and strategy. It’s about taking a systematic, structured approach to paying attention to what you’re doing, what is being produced (and how), and anchoring it to why you’re doing it by using monitoring and evaluation data. DE helps to identify potentially promising practices or products and guide the strategic decision-making process that comes with innovation. When embedded within a design process, DE provides evidence to support the innovation process from ideation through to business model execution and product delivery.

This evidence might include the kind of information that helps an organization know when to scale up effort, change direction (“pivot”), or abandon a strategy altogether.

Powerful stuff.

Except, it can also be a trap.

It’s a Trap!

Star Wars fans will recognize the phrase “It’s a Trap!” as one of special — and much parodied — significance. Much like the Rebel fleet’s jeopardized quest to destroy the Death Star in Return of the Jedi, embarking on a DE is no easy or simple task.

DE was developed by Michael Quinn Patton and others working in the social innovation sector in response to the needs of programs operating in conditions of high volatility, uncertainty, complexity, and ambiguity, to help them function better in such environments through evaluation. That meant providing useful data that recognized the context and supported strategic decision-making with rigorous evaluation, rather than using tools ill-suited to complexity and simply doing the ‘wrong thing righter’.

The following are some of the ‘traps’ I’ve seen organizations fall into when approaching DE. A parallel set of posts exploring the practicalities of these traps is going up on the Cense site, along with tips and tools for avoiding and navigating them.

A trap is usually camouflaged and employs some kind of lure to draw people in. It is, by its nature, deceptive, intended to ensnare those who come upon it. By knowing what the traps are and what to look for, you might just avoid falling into them.

A different approach, same resourcing

A major trap in going into a DE is thinking that it is just another type of evaluation and thus requires the same resources as a standard evaluation. Wrong.

DE most often requires more resources to design and manage than a standard program evaluation, for many reasons. One of the most important is that DE is about evaluation + strategy + design (the emphasis is on the ‘+’s). A DE budget needs to account for the fact that three activities normally treated separately are now coming together. This may not mean the costs are necessarily higher (they often are), but the work required will span multiple budget lines.

This also means that, operationally, one cannot simply have an evaluator, a strategist, and a program designer work separately. There must be collaboration and time spent interacting for DE to be useful. That carries coordination costs.

Another big issue is that DE data can be ‘fuzzy’ or ambiguous, even when collected with a strong design and method, because the innovation activity usually has to be contextualized. Further complicating things, the DE data stream is bidirectional: data comes from the program’s products and processes as well as from the strategic decision-making and design choices. This mutually influencing process generates more data, but also requires sensemaking to sort through what the data means in the context of its use.
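
To make the two-way stream concrete, here is a minimal sketch in Python of what a combined DE evidence log might look like. Everything in it (the Observation structure, the field names, the sample entries) is invented for illustration; it is not a prescribed DE tool.

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class Observation:
    """One piece of DE evidence, tagged with which direction it came from."""
    source: str    # "program" (products/process) or "strategy" (decisions/design choices)
    note: str      # the observation itself
    context: str   # the conditions under which it was collected
    recorded: date = field(default_factory=date.today)

# Both directions of the stream land in one log so that sensemaking
# sessions can review program evidence and strategic choices together.
log = [
    Observation("program", "participant drop-off after week 3 of the pilot",
                "cohort of 12, winter session"),
    Observation("strategy", "team chose to shorten onboarding",
                "response to the week-3 drop-off"),
]

for obs in log:
    print(f"[{obs.source}] {obs.note} ({obs.context})")
```

The point is not the code but the tagging: without knowing which side of the loop an observation came from, the mutual influence described above stays invisible.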

The biggest resource that gets missed? Time. Specifically, not giving enough time to the conversations that make sense of what the data means. Setting aside regular time, at intervals appropriate to the problem context, is a must, and too often organizations don’t budget for it.

The second? Focus. While a DE approach can capture an enormous wealth of data about process, outcomes, strategic choices, and design innovations, there is a need to temper the amount collected. More is not always better. More can be a sign of a lack of focus and can lead organizations to collect data for data’s sake, not for a strategic purpose. If you don’t have a strategic intent, more data isn’t going to help.

The pivot problem

The term ‘pivot’ comes from the Lean Startup approach and is found in Agile and other product development systems that rely on short, iterative cycles with accompanying feedback. A pivot is a change of direction based on that feedback: collect the data, see the results, and if the results don’t yield what you want, make a change and adapt. Sounds good, right?

It is, except when the decisions aren’t well grounded in data. DE has given organizations cover for making arbitrary decisions in the name of pivoting when they really haven’t executed well or given things enough time to show whether a change of direction is warranted. I once heard an educator explain how good his team was at pivoting their strategy for training their clients and students. They were taking a developmental approach to the course (because it was on complexity and social innovation). Yet I knew that the team, a group of highly skilled educators, hadn’t spent nearly enough time coordinating and planning the course.

There is a difference between a presenter who adds something at the last minute to capitalize on what has emerged from the situation and improve the presentation, and someone who has not put the time and thought into what they are doing and is simply rushing. One is a pivot in the service of excellence; the other is poor execution. The trap is confusing the two.
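
One way to keep that distinction honest is to gate the pivot conversation on evidence and execution time. The sketch below is a deliberately simple illustration, not a recipe from the Lean Startup or DE literature; every threshold, name, and number in it is a made-up placeholder.

```python
def should_discuss_pivot(outcomes: list[float],
                         weeks_running: int,
                         min_weeks: int = 6,
                         min_points: int = 20,
                         target: float = 0.5) -> bool:
    """Open a pivot conversation only when there is enough execution
    behind the strategy and enough data to judge it against a target."""
    if weeks_running < min_weeks or len(outcomes) < min_points:
        return False  # not enough execution or evidence yet: keep going
    mean = sum(outcomes) / len(outcomes)
    return mean < target  # results persistently below target: talk about pivoting

# Three weeks in, with three data points: too early to call anything a pivot.
print(should_discuss_pivot([0.2, 0.3, 0.25], weeks_running=3))  # False

# Eight weeks of consistently weak results: a grounded reason to discuss change.
print(should_discuss_pivot([0.2] * 24, weeks_running=8))        # True
```

A rule like this doesn’t make the decision; it simply forces the question of whether the team has executed enough to have earned one.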

Fearing success

“If you can’t get over your fear of the stuff that’s working, then I think you need to give up and do something else” – Seth Godin

A truly successful innovation changes things: mindsets, workflows, systems, and outcomes. Innovation affects the things it touches in ways that might not be foreseen. It also means recognizing that things will have to change to accommodate the success of whatever innovation you develop. But change can be hard to adjust to, even when it is what you wanted.

It’s a strange truth that many non-profits are designed to put themselves out of business. If there were no more political injustices or human rights violations around the world, there would be no Amnesty International. The World Wildlife Fund or Greenpeace wouldn’t exist if the natural world were deemed safe and protected. And there are no prominent NGOs devoted to eradicating polio anymore because we pretty much have… or have we?

Self-sabotage exists for many reasons, including discomfort with change (staying the same is easier than changing), preservation of status, and a variety of interpersonal, relational reasons, as psychologist Ellen Hendriksen explains.

Seth Godin suggests you need to find something else to do if you’re afraid of success, and that might work. I’d prefer that organizations do a kind of innovation therapy with themselves: engage in organizational mindfulness and do the emotional, strategic, and reflective work to ensure they are prepared for success, as well as for failure, which is a big part of the innovation journey.

DE is a strong tool for capturing success (in whatever form that takes) within the complexity of a situation; the trap is focusing on too many parts, or on parts that aren’t providing useful information. It’s not always possible to know this at the start, but there are things that can be done to hone the focus over time. As the saying goes: when everything is in focus, nothing is in focus.

Keeping the parking brake on

And you may win this war that’s coming
But would you tolerate the peace? – “This War” by Sting

You can’t drive far or well with your parking brake on. And if innovation is meant to change systems, you can’t keep the same thinking and structures in place and still expect to move forward. Developmental evaluation is not just for understanding your product or service; it’s also meant to inform the ways in which that entire process influences your organization. They are symbiotic: one affects the other.

Just as we might fear success, we may also fail to prepare for (or tolerate) it when it comes. Success with one goal means having to set new goals. It moves the goalposts. It also means reframing what success means going forward. Sports teams face this problem in reframing their mission after winning a championship. The same is true for organizations.

This is why building a culture of innovation, with DE embedded within it, is so important. Innovation can’t be a ‘one-off’; it needs to be part of the fabric of the organization. If you set yourself up for change, real change, as a developmental organization, you’re more likely to be ready for the peace after the war is over, as the lyric above asks.

Sealing the trap door

Learning — which is at the heart of DE — fails in bad systems. Avoiding the traps discussed above requires building a developmental mindset within an organization alongside doing a DE. Without that mindset, it’s unlikely anyone will avoid falling into them. Change your mind, and you can change the world.

It’s a reminder of the need to put in the work to make change real, and that DE is not plug-and-play. To quote Martin Luther King Jr.:

“Change does not roll in on the wheels of inevitability, but comes through continuous struggle. And so we must straighten our backs and work for our freedom. A man can’t ride you unless your back is bent.”

 

For more on how Developmental Evaluation can help you to innovate, contact Cense Ltd and let them show you what’s possible.  

Image credit: Author

complexity, evaluation, systems thinking

Diversity / Complexity in Focus


Diversity in focus?

 

As cities and regions worldwide celebrate Pride, the role of diversity, understanding, and unity has been brought to mind, just as it has in contemplating the British public’s choice to leave the EU with Brexit. Both events offer lessons in dealing with complexity and in why diversity isn’t about either/or, but about both and neither. We might also learn something not just from the English, but from their gardens, too.

It’s a tad ironic that London has been celebrating its Pride festival this past week, a time when respect for and celebration of diversity, and the unification of humanity, were top-of-mind, while the country voted to undo many of the policies that fostered political and economic union and will likely reduce cultural diversity with Europe. These kinds of ironies are not quirks; they are real manifestations of what can happen when we reduce complexity into binaries.

This kind of simplistic, reductionist thinking can have enormously harmful and disruptive effects that ripple throughout a system, as we have seen in the United Kingdom, Europe, and the world in the past week.

Complexity abhors dichotomies

There are two kinds of people in the world: those who believe there are two kinds of people and those who don’t

The above quote (which has many variations, including one attributed to author Tom Robbins that I like) makes light of the problem of lumping the complex mass of humanity into two simple categories. It abstracts variation to such a level that it becomes nearly meaningless. The Brexit vote is similar. Both are lessons in complexity as it is lived, because both reduce a nuanced, multi-faceted set of issues into binary options.

It is no surprise that, in the days following the Brexit vote, there is much talk of a divided, rather than a united, kingdom.

Diversity is difficult to deal with and is often left unaddressed as a result. Yet the benefits of having diversity expressed and channeled within a complex system are many, articulated in research and practice across sectors: protection from disruption, better-quality information, a richer array of strategic options and, in social contexts, a more inclusive community.

The risks are many too, but different in nature. Diversity can produce tension that can be used for creative purposes, liberation, and insight, as well as confusion and conflict, simultaneously. This has a lot to do with humans’ uneasy relationship with change. For some, change is easier dealt with by avoiding it, which is what many in the Leave camp thought they could do by voting the way they did. The darker side of the Leave campaign portrayed change as an image of non-white immigrants and refugees flooding into Britain, presumably to stoke fear, among those uncomfortable with (or outwardly hostile to) others, of the change that could come from staying in the European Union.

Staying the same requires change

The author Giuseppe di Lampedusa once wrote about the need to change even when desiring to keep things as they are, because even if we seek stability, everything around us is changing, and thus the system (or systems) we are embedded in are in flux. That need to change in order to stay the same was something many UK citizens voiced. What was to change and what was to stay the same could not be captured by a ‘Leave’ or ‘Remain’ statement, yet that is what they were given.

It should come as no surprise that, when presented with a stark choice on a complex matter, there would be deep dissatisfaction with the result no matter what happened. We are seeing the fallout in the myriad factions and splintering of both main political parties, Conservative and Labour, and in a House of Commons now filled with rebellion. Is the UK better off? So far, no way.

This is not necessarily because of the Leave vote itself, but because of the mishandling of the campaigns and the lack of a plan for moving forward (by both camps). Further complicating matters, the very EU that Britain voted to leave is no longer the same place it was when the vote was held just five days ago. It now faces rising voices for reform and potential separation votes from other member states who saw their causes bolstered or hindered by the UK referendum. This is complexity in action.

Tending the garden of complex systems

The English know more about complexity than they might realize. An English garden is an example of complexity in action and of how it relates to the balance of ordered, disordered, and unordered systems. A look at a typical English garden will find areas of managed beauty, wildness, and chaos all within metres of one another.


What also makes a garden work is that it requires the right balance of effort, adaptive action, planning, and compensating, as well as the ability to let go, all at the same time. Gardening requires constant attention to the weather, the seasons, the mix of diversity within the system, the boundaries of the system itself (lest weeds or other species invade from outside, or plants migrate into a neighbour’s place), and how one works with all of it in real time.

Social systems are the same. They need to be paid attention to and acted upon strategically, in their own time and way. This is why annual strategic planning retreats can be so poorly received. We take an organization, with all its complexity, and decide that once per year we’ll sit down, reflect on things, and plan for the future. Complexity-informed planning requires a level of organizational mindfulness that engages the planning process dynamically; it may involve full-scale, organization-wide strategy sessions more frequently, or with specific groups, than is normally done. Rather than using what are really arbitrary timelines (annual retreats, 5-year plans, and so forth), the organization takes a developmental approach, like a gardener, and tends to its strategic needs in rhythms that fit the ecosystem in which it finds itself.

This kind of work requires: 1) viewing yourself as part of a system; 2) engaging in regular, sustained planning efforts; 3) aligning those efforts with a developmental evaluation process that continually monitors and collects data to support strategic decision-making; and 4) a structured, regular process of sensemaking, so that an organization can see what is happening and make sense of it in real time, not retrospectively, because one can only act in the present, not the future or past.

Just as a garden doesn’t reduce complexity to being either on or off, neither should our social or political systems. Until we start realizing this and acting on it, by design, at the senior strategic level of organizations, communities, and nations, we may see Brexit-like conditions fostered in places well beyond the white cliffs of Dover, in governments and organizations globally.

Photo Credits: The London Eye Lit Up for Pride London by David Jones and Hidcote Manor Garden by David Catchpole, both used under Creative Commons License via Flickr. Thanks to the Davids for sharing their work.

complexity, design thinking, social systems, systems thinking

(Re) Making our World


Britain’s vote to leave the European Union is a nod to both the future and the past, and both perspectives illustrate how citizens everywhere are struggling with how best to make their world, and with the extent to which that’s possible. These are design choices with systems implications that will be felt far beyond those making the decisions.

We only ever understand systems from a perspective: where you sit within a system determines which of its properties are relevant to you. Those properties look different (or may be wholly imperceptible) depending on the vantage point taken within the system. This is what makes understanding and working with systems so challenging.

This morning the world woke up to find that the British people had voted to leave the European Union. For the Brits, this was a choice about where they wanted to place a boundary around certain systems (political, economic, geographic) and how they perceived having control over what took place within and around those boundaries (and, indeed, over their very nature). Boundaries and constraints are principally what define a system, because they shape what happens inside it. For Britain, it was a choice to redraw those boundaries more tightly in the hope that it will bring greater good to the nation.

Boundary critique in complex systems

There is a guide in systems thinking that says: if you’re trying to understand a system and find yourself lost, you’ve probably bounded the system too loosely; if you’re constantly seeking explanations outside the boundaries for what happens within them, you’ve bounded it too tightly.

The choice millions of Britons made about their own country, their boundaries, has influenced the world as stock markets shake, currencies are devalued, and entire economies are rattled, not just now but potentially for some time to come. Oil prices have fallen sharply in the wake of the decision, which will affect every part of the economy and further delay any shift away from carbon-based fuels. The European Union and the entire world are feeling the effect of the roughly 52% of voting Britons (about 17 million people) who decided they would be better off outside the EU than within it. Consider how relatively few people can have such an enormous impact: a perfect illustration of complexity in action.

And as England seeks to redraw its boundary, there is already discussion of another Scottish independence vote in the wake of this, which may redraw the boundary further. These votes are intentional acts, perhaps the most straightforward expressions of intention and self-determination within a democracy, but their impacts and outcomes for citizens and the world around them are far from straightforward, which makes such direct democracy far more problematic than its supporters make out. This is not to say that such votes are necessarily good or bad, but they are certainly not simple.

Co-design and its problems

The Brexit vote invites memory of a quote from one of Britain’s most famous leaders, Winston Churchill, who quipped that democracy is the worst form of government except for all the others. Democracy creates the opportunity to co-design our political systems, policy choices, and boundaries. By voicing an opinion and engaging in the act of voting, we citizens play a role in co-designing what we want from our country. The downside is that we each engage in this exercise from where we sit in the system: the design I want might not be what someone else in my country wants, nor may we see the same information the same way, or even consider the same information relevant.

It’s for this reason that we’re seeing strange things in politics these days. The volume of information available to us, and the complexity of the layered contexts in which that information applies, make even a simple decision like a vote for or against something far more challenging. Complexity is created by volatility, lack of certainty, absence of predictability, and dynamism. Co-design introduces all of this and, on a national scale, amplifies its impact.

This is a massive challenge for everyone, but particularly for those who come from the design and systems science realms. In design, co-design has been touted as a desired, if not idealized, principle for guiding the making of everything from learning experiences to services, products, and policies. In systems thinking and the related sciences, the focus is too often less on what is created than on how it impacts things, offering more description and analytical insight than guidance on what ought to be developed and how. Bringing these two worlds together, as systemic design, may never have been more important.

Systemic design, boredom and critical making

Rosanne Somerson, President of the Rhode Island School of Design, recently wrote about the importance of boredom in spurring creativity in design. In it she uses the term ‘critical making’ instead of design thinking. I love that term. It does a better job of reflecting the thinking-in-action praxis that is really at the heart of good design. She describes the insights and bursts of creativity that come from her students when she allows them (rather, forces them) to be bored.

What boredom can do is prompt a form of mindfulness: an emptying of thoughts that allows us to escape the rush of stimulation we get from the world and permits new insights to come in. It is a way of temporarily freeing oneself from the path dependence created by entrained thought patterns that seek out certain stimulation (which is why we tend to re-think the same things over and again). This is enormously useful for helping organizations and individuals operating in complex systems to see things differently: to become aware of the prevailing current rather than being swept up in it, and to evaluate whether that current is useful or not.

Creating the space to be mindful, to better understand one’s place in a system, and to grasp the potential consequences of change within that system is one of the key contributions systemic design can offer. It is about engaging in social critical making, and it may be a way out of the trap of creating simple binaries of stay versus leave, yes versus no. Surely no Briton thought that membership in the EU was all bad or all good, but dividing the choice into in or out may have been a case of taking a simple approach to a complex problem, and we will now see how a simple choice has complex reverberations throughout the system, now and into the future. Time will tell whether these choices, and the resulting choices of other nations and regions, will bring us closer together, push us further apart, or do something else entirely.


Photo credits: Brexit Scrabble by Jeff Djevdet and Sad Day #brexit from Jose Manuel Mota both used under Creative Commons License via Flickr. Thanks Jeff and Jose for sharing your art.

education & learning, evaluation

Reflections said, not done


 

Reflective practice is the cornerstone of developmental evaluation and organizational learning, and yet it is one of the least discussed (and most poorly supported) aspects of these processes. It’s time to reflect a little on reflection itself.

The term reflective practice was popularized by the work of Donald Schön in his book The Reflective Practitioner, although the concept of reflecting while doing was discussed by Aristotle and serves as the foundation for what we now call praxis. What made reflective practice as a formal term different was that it spoke to a deliberative process of reflection designed to meet specific developmental goals and build capacities. Many professionals had long been doing this, but Schön created a framework for understanding how it could be done, and why it was important, in professional settings as a means of enhancing learning and improving innovation potential.

From individual learners to learning organizations

As the book title suggests, the focus of Schön’s work was on the practitioner. By cultivating a focus, a mindset, and a skill set for looking at practice-in-context, Schön (and those who have built on his work) suggest that professionals can enhance their capacity to perform and learn as they go, through habits and regular practices of critically inquiring about their work as they work.

This approach has many similarities to mindfulness in action (or organizational mindfulness), contemplative inquiry, and engaged scholarship, among others. But, aside from organizational mindfulness, these approaches are designed principally to support individuals in learning about and reflecting on their work.

There’s little question that paying attention to and reflecting on what is being done has value for someone seeking to improve the quality of their work and its potential impact. But it’s not enough, at least in practice, even if it is in theory. The evidence lies in the astonishing absence of examples of sustained change initiatives supported by reflective practice and, more particularly, by developmental evaluation, an approach for bringing reflection to bear on the way we evolve programs over time. This is not a criticism of reflective practice or developmental evaluation per se, but of the problems many have in implementing them in a sustained manner. From professional experience, this comes down largely to what is required to actually do reflective practice in practice.

For developmental evaluation it means connecting what it can do to what people actually will do.

Same theories, different practices

The flaw in all of this is that the implementation of developmental evaluation is often predicated on implicit assumptions about learning: how it’s done, who’s responsible for it, and what it’s intended to achieve. The founding works on developmental evaluation (DE) by Patton and others point to practices and questions that can support DE work.

While enormously useful, they make the (reasonable) assumption that organizations are in a position to adopt them. What is worth considering for any organization looking to build DE into its work is: are we really ready to reflect in action? Do we do it now? And if we don’t, what makes us think we’ll do it in the future?

In my practice, I continually meet organizations that want to use DE, be innovative, become adaptive, and learn more deeply from what they do, yet when we speak about what they currently do to support this in everyday practice, few examples are presented. The reason is largely time: the priorities and organization of our practice in relation to it. Time, and its felt scarcity for many of us, is one of the substantive limits, and reflective practice requires time.

The other is space. Are there accessible places for reflection on the issues that matter? These twin limits have been touched on in other posts, but they speak to the limits of DE in effecting change without the ability to build reflection into practice. The theory of DE is sound, but the practice of it is tied to the ability to use time and space to support the necessary reflection and sensemaking.

The architecture of reflection

If we are to derive the benefits of DE and innovate more fully, reflective practice is critical, for without one we can’t have the other. This means designing reflective space and time into our organizations ahead of undertaking a developmental evaluation. It invites questions about where and how we work in space (physical and virtual) and how we spend our time.

To architect reflection into our practice, consider some questions or areas of focus:

  • Are there spaces for quiet contemplation, free of stimulation, available to you? This might mean a screen-free environment: a quiet space away from traffic.
  • Is there organizational support for ‘unplugging’ in daily practice? This would mean turning off email, phones, and other devices’ notifications to support focused attention on something. And, within that space, is there encouragement to use the quiet time to look at and think about evaluation data and reflect on it?
  • Are there spaces and times for these practices to be shared and done collectively or in small groups?
  • If we are not granting ourselves time to do this, what are we spending the time doing and does it add more value than what we can gain from learning?
  • Sometimes off-site trips and scheduled days away from an office are helpful by giving people other spaces to reflect and work.
  • Can you (will you?) build committed times to reflect-in-action into scheduled work times and flows, structurally, and ensure that this is done at regular intervals, not periodic ones?
  • If our current spaces are insufficient to support reflection, are we prepared to redesign them or even move?

These are starting questions, and hard ones to ask, but they can mean the difference between reflection in theory and reflection in practice, which is the difference between innovating, adapting, and thriving in practice, not just in theory or aspiration.

behaviour change, business, innovation, knowledge translation

The hidden cost of learning & innovation


The costs of books, materials, tuition, and conference fees often distort the perception of how much learning costs, creating larger distortions in how we perceive knowledge to benefit us. By looking at the price we pay for integrating knowledge and experience, we might re-evaluate what we need, what we have, and what we pay attention to in our learning and innovation quest.

A quote paraphrased and attributed to German philosopher Arthur Schopenhauer points to one of the fundamental problems facing books:

Buying books would be a good thing if one could also buy the time to read them in: but as a rule the purchase of books is mistaken for the appropriation of their contents.

Schopenhauer died in 1860, when the book was the dominant medium of codified knowledge and the availability of books was limited. This was before radio, television, the Internet, and the confluence of them all in today’s mediascape, from Amazon to the iPhone and beyond.

Schopenhauer exposes the fallacy of equating access to information with knowledge. This fallacy underpins the major challenge facing our learning culture today: quantity of information versus quality of integration.

Learning time

Consider something like a conference or seminar. How often have you attended a talk or workshop, been moved by what you heard and saw, taken furious notes, and walked out of the room vowing to make a big change based on what you just experienced? And then what happened? My guess is that the world outside that workshop or conference looked a lot different than it appeared from within it. You had emails piled up, phone messages to return, colleagues to convince, resources to marshal, patterns to break, and so on.

Among the simple reasons: we do not protect the time and resources required to actually learn and to integrate that knowledge into what we do. As a result, we mistake the volume of ‘things’ we expose ourselves to for learning outcomes.

One solution is to embrace what consultant, writer, and blogger Sarah Van Bargen calls ‘intentional ignorance’. This approach involves turning away from the ongoing stream of data and accepting that there are things we won’t know and will simply miss. Van Bargen isn’t calling for a complete shutting of the door, but rather something akin to an information sabbatical, or what some might call a digital sabbath. Sabbath and sabbatical share the Latin root sabbatum, which means ‘to rest’.

Rebecca Rosen, who writes on work and business for The Atlantic, argues we don’t need a digital sabbath; we need more time. Rosen’s piece points to a number of trends suggesting that the way we work now means producing more, more often, throughout more of the day. The problem is not about more; it’s about less. It’s also about different.

Time, by design

One of the challenges is our relationship to time in the first place and the forward orientation we have to our work. We humans are designed to look forward, so it is no surprise that we engineer our lives and organizations to do the same. Sensemaking orients our gaze to the future by looking at both the past and the present, and by taking time to look at what we have before we consider what else we need. It helps reduce, or at least manage, complex information into actionable understanding by putting data into its proper context. This can’t be done by automation.

It takes time.

It means….

….setting aside time to look at the data and discuss it with those who are affected by it, who helped generate it, and who are close to the action;

….taking time to gather the right kind of information: context-rich, measuring things that have meaning, with appropriate scope and precision;

….understanding your enterprise’s purpose(s) and designing programs to meet those purposes, perhaps dynamically through things like developmental evaluation models and developmental design;

….creating organizational incentives and protections for people to integrate what they know into their jobs and roles, and building organizations adaptive enough to absorb, integrate, and transform based on this learning: becoming a true learning organization.

By changing the practices within an organization we can start shifting the way we learn and increase the likelihood of learning taking place.

Buying time

Imagine buying both the book and the time to read and think about it. Imagine sending people on courses and then giving them the tools and opportunity to try the lessons (the good ones, at least) in practice within the context of the organization. If learning is really a priority, what kind of time is given to people to share what they know, listen to others, and collectively make sense of what it means and how it influences strategy?

What we might find is that we do less. We buy less. We attend less. We subscribe to less. Yet, we absorb more and share more and do more as a result.

The cost of learning then shifts, maybe even to less than we spend now, but it means we factor in time, not just product, in our learning and knowledge production activities.

This can happen and it happens through design.


Photo credit: Tim Sackton, used under Creative Commons License via Flickr.

Abraham Lincoln quote image from TheQuotepedia.

complexity, education & learning, evaluation, systems thinking

Developmental Evaluation: Questions and Qualities

Same thing, different colour or different thing?

Developmental evaluation, a form of real-time evaluation focused on innovation and complexity, is gaining interest and attention from funders, program developers, and social innovators. Yet its popularity is revealing fundamental misunderstandings and misuses of the term that, if left unquestioned, may threaten the advancement of this important approach as a tool to support innovation and resilience.

If you are operating in the social service, health promotion, or innovation space, it is quite possible that you’ve been hearing about developmental evaluation, an emerging approach to evaluation suited to programs operating in highly complex, dynamic conditions.

Developmental evaluation (DE) is an exciting advancement in evaluative and program design thinking because it links those two activities together, creating an ongoing conversation about innovation in real time to facilitate strategic learning about what programs do and how they can evolve wisely. Because it is rooted in traditional program evaluation theory and methods as well as complexity science, it takes a realist approach to evaluation, making it fit the thorny, complex, real-world situations that many programs inhabit.

I ought to be excited at seeing DE brought up so often, yet I am often not. Why?

Building a better brand for developmental evaluation?

Alas, with rare exceptions, when I hear someone speak about the developmental evaluation they are involved in, I fail to hear the indicator terms one would expect from such an evaluation. These include terms like:

  • Program adaptation
  • Complexity concepts like emergence, attractors, self-organization, and boundaries
  • Strategic learning
  • Surprise!
  • Co-development and design
  • Dialogue
  • System dynamics
  • Flexibility

DE is following the well-worn path laid by terms like systems thinking, which grows less useful every day as it comes to mean any mode of thought that considers the bigger context of a program (the system (?), whatever that is; it’s never elaborated on), even when there is none of the structure, discipline, method, or focus one would expect from true systems thinking. In other words, it’s thinking about a system without the effort of real systems thinking. Still, people see themselves as systems thinkers as a result.

I hear the term DE used more and more frequently in this cavalier manner, in a way that I suspect reflects aspiration rather than reality.

This aspiration is likely about wanting to be seen (by themselves and others) as innovative, adaptive, and participative, and as being a true learning organization. DE has the potential to support all of this, but accomplishing it requires an enormous amount of commitment. It is not for the faint of heart, the rigid and inflexible, the traditionalists, or those who have little tolerance for risk.

Doing DE requires that you set up a system for collecting, sharing, sensemaking, and designing-with data. It means being willing, and competent enough to know how, to adapt your evaluation design and your programs themselves in measured, appropriate ways.
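
As a rough illustration of what ‘a system for collecting, sharing, sensemaking, and designing-with data’ might look like as a repeating loop, here is a minimal sketch. The function names and stubbed behaviour are hypothetical stand-ins; in a real DE the middle two steps are conversations among people, not code.

```python
def collect(sources: list[str]) -> list[dict]:
    """Gather evidence from program and strategy activity (stubbed here)."""
    return [{"source": s, "note": f"observation from {s} activity"} for s in sources]

def share(evidence: list[dict]) -> None:
    """Circulate the evidence ahead of a sensemaking session."""
    for item in evidence:
        print(f"shared: [{item['source']}] {item['note']}")

def make_sense(evidence: list[dict]) -> str:
    """Stands in for the team conversation that interprets the evidence."""
    return f"tentative pattern drawn from {len(evidence)} observations"

def design_with(insight: str) -> str:
    """Turn the interpreted pattern into a program (and evaluation) adjustment."""
    return f"proposed adjustment based on: {insight}"

# One turn of the loop; in practice it repeats at a regular cadence,
# and the evaluation design itself is fair game for adjustment.
evidence = collect(["program", "strategy"])
share(evidence)
print(design_with(make_sense(evidence)))
```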

DE is about discipline, not precision. Too often I see quests to fit a beautiful, elegant design to the ‘social messes’ that are the programs under evaluation, only to do what Russell Ackoff calls “the wrong things, righter” by applying a standard, rigid method to a slippery, complex problem.

Maybe we need to build a better brand for DE.

Much ado about something

Why does this fuss about the way people use the term DE matter? Is this not just an academic rant based on a sense of the ‘preciousness’ of a term? Who cares what we call it?

This matters because the programs that use and can benefit from DE matter. If it’s just gathering some loose data, slapping it together, and calling it an evaluation, knowing that nothing will ever be done with it, then maybe it’s OK (actually, that’s not OK either, but let’s pretend for the sake of the point). When real program decisions are made, jobs are kept or lost, communities are strengthened or weakened, and the energy and creative talents of those involved are put to the test because of evaluation and its products, the details matter a great deal.

If DE promises a means to critically, mindfully, and thoroughly support learning and innovation, then it needs to keep that promise. But that promise can only be kept if what we call DE is not something else.

That ‘something else’ is often a form of utilization-focused evaluation, or perhaps participatory evaluation, or simply a traditional evaluation model dressed up with words like ‘complexity’ and ‘innovation’ that carry no real meaning. (When was the last time you heard someone openly question what someone meant by those terms? We take them as given and make enormous assumptions about what they mean that are not always supported.)

There is nothing wrong with any of these methods when they are appropriate, but too often I see mismatches between the problem and the evaluative thinking and practice tools used to address it. DE is new, sexy, and a sure sign of innovation to some, which is why it is often picked.

Yet it’s like saying “I need a 3-D printer” when what you need is a wrench to fix a pipe on your sink, because the printer is the latest tool innovation and wrenches are “last year’s” tool. It makes no sense. Yet it’s done all the time.

Qualities and qualifications

There is something alluring about the mysterious. Innovation, design, and systems thinking all have elements of mystery to them, which allows for obfuscation, confusion, and well-intentioned errors in judgement, depending on who and what is being discussed in relation to those terms.

I’ve started seeing recent university graduates claiming to be developmental evaluators who have almost no grasp of complexity or service design and have completed just a single course in program evaluation. I’m seeing traditional organizations recruit and hire for developmental evaluation without making any adjustments to their expectations, modes of operating, or timelines, and still expecting results that could only come from DE. It’s as I’ve written before, and as Winston Churchill once said:

I am always ready to learn, but I don’t always like being taught

Many programs are not even primed to learn, let alone be taught.

So what should someone look for in DE and in those who practice it? What questions should those seeking DE support ask of themselves?

Of evaluators

  • What familiarity and experience do you have with complexity theory and science? What is your understanding of these domains?
  • What experience do you have with service design and design thinking?
  • What kind of evaluation methods and approaches have you used in the past? Are you comfortable with mixed-methods?
  • What is your understanding of the concepts of knowledge integration and sensemaking? And how have you supported others in using these concepts in your career?
  • What is your education, experience and professional qualifications in evaluation?
  • Do you have skills in group facilitation?
  • How open and willing are you to support learning, adapt, and change your own practice and evaluation designs to suit emerging patterns from the DE?

Of programs

  • Are you (we) prepared to alter our normal course of operations in support of the learning process that might emerge from a DE?
  • How comfortable are we with uncertainty? Unpredictability? Risk?
  • Are the timelines and boundaries we place on the DE flexible and negotiable?
  • What kind of experience do we have with truly learning, and are we prepared to create a culture around the evaluation that is open to learning? (This means tolerance of ambiguity, failure, surprise, and new perspectives.)
  • Do we have practices in place that allow us to be mindful and aware of what is going on regularly (as opposed to every 6-months to a year)?
  • How willing are we to work with the developmental evaluator to learn, adapt and design our programs?
  • Are our funders/partners/sponsors/stakeholders willing to come with us on our journey?

Of both evaluators and program stakeholders

  • Are we willing to be open about our fears, concerns, ideas and aspirations with ourselves and each other?
  • Are we willing to work through data that is potentially ambiguous, contradictory, confusing, time-sensitive, context-sensitive and incomplete in capturing the entire system?
  • Are we willing/able to bring others into the journey as we go?

DE is not a magic bullet, but it can be a very powerful ally for programs operating in domains of high complexity that require innovation to adapt, thrive, and build resilience. It is an important job and a formidable challenge, with great potential benefits for those willing to dive into it competently. It is for these reasons that it is worth doing, and doing well.

Getting there means taking seriously DE, the demands it puts on us, and the requirements for all involved, and being clear in our language, lest we let the not-good-enough become the enemy of the great.

 

Photo credit: Highline Chairs by the author

complexity, design thinking, emergence, evaluation, systems science

Developmental Evaluation and Design

Creation for Reproduction

 

Innovation is about channeling new ideas into useful products and services, which is really about design. Thus, if developmental evaluation is about innovation, it is also fundamental that those engaged in such work, on both the evaluator and program ends, understand design. In this final post of the first series of Developmental Evaluation and…, we look at how design and design thinking fit with developmental evaluation, and at the implications for programs seeking to innovate.

Design is a field of practice that encompasses professional domains, design thinking, and critical design approaches. It is a big, creative field, and a space rich in thinking, methods, and tools that can aid program evaluators and program operators alike.

Defining design

In their excellent article on designing for emergence (PDF), OCAD University’s Greg Van Alstyne and Bob Logan introduce a definition they set out to make the shortest, most concise one they could envision:

Design is creation for reproduction

It may also be the best (among many; see the Making CENSE blog for others), because it speaks to what design does, what it is intended to do, and where it came from, all at the same time. A quick historical look finds that the term ‘design’ didn’t really exist until the industrial revolution. It was not until we could produce things and replicate them on a wide scale that design actually mattered; prior to that, what we had was simply referred to as craft. One did not supplant the other. However, as societies transformed through migration, technology development and adoption, and shifting political and economic systems that increased collective action and participation, we saw things (products, services, and ideas) primed for replication and distribution, and thus designed.

The products, services, and ideas that succeeded tended to be better designed for such replication, in that they struck a chord with an audience who wanted to further share and distribute them. (This is not to say that all things replicated are of high quality or ethical value, just that they found the right purchase with an audience and were better designed to provoke that.)

In a complex system, emergence is the force that provokes the kind of replication we see in Van Alstyne and Logan’s definition of design. With emergence, new patterns arise from activity that coalesces around attractors, and this is what produces novelty and new information for innovation.

A developmental evaluator creates mechanisms to capture data and channel it to program staff and clients, who can then make sense of it and choose either to take actions that stabilize the new pattern of activity in whatever manner possible, to amplify it, or, if it is not helpful, to make adjustments that dampen it.
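
The three responses named above (stabilize, amplify, dampen) can be made explicit as an option space. The toy rule below is only an illustration of that space under assumed inputs; in a real DE the judgement about whether a pattern is helpful or established comes from program staff in sensemaking sessions, not from code.

```python
from enum import Enum

class Response(Enum):
    STABILIZE = "hold the new pattern in place"
    AMPLIFY = "add attention and resources so the pattern grows"
    DAMPEN = "adjust activity so the pattern recedes"

def choose_response(pattern_is_helpful: bool, already_established: bool) -> Response:
    """Map a sensemaking judgement onto the three possible responses."""
    if not pattern_is_helpful:
        return Response.DAMPEN
    return Response.STABILIZE if already_established else Response.AMPLIFY

# A promising but fragile new pattern of activity: grow it.
print(choose_response(pattern_is_helpful=True, already_established=False).value)
```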

But how do we do this if we are not designing?

Developmental evaluation as design

A quote from Nobel Laureate Herbert Simon is apt when considering why the term design is appropriate for developmental evaluation:

“Everyone designs who devises courses of action aimed at changing existing situations into preferred ones”.

Developmental evaluation is about modification, adaptation, and evolution in innovation (poetically speaking), using data as a provocation and guide for programs. One of the key features that makes developmental evaluation (DE) different from other forms of evaluation is the heavy emphasis on use of evaluation findings. No use, no DE.

But further, what separates DE from utilization-focused evaluation (PDF) is that the use of evaluation data is intended to foster development of the program, not just use. I’ve written about what development looks like in other posts. No development, no DE.

Returning to Herb Simon’s quote, we see that the goal of DE is to provoke discussion of development and thus change, so it could be argued that, at least at some level, DE is about design. That is a tepid assertion. A bolder one is that design is actually integral to development, and thus developmental design is what we ought to be striving for through our DE work. Developmental design is about evaluative thinking and design thinking together. It brings the spirit of experimentation within complexity, and the feedback systems of evaluation, together with a design sensibility for how to make sense of, pay attention to, and transform that information into a new product evolution (innovation).

This sounds great, but if you don’t think about design then you’re not thinking about innovating, and that means you’re not really developing your program.

Ways of thinking about design and innovation

There are numerous examples of design processes and steps. Full coverage is beyond the scope of a single post and will be expounded on in future posts here and on the Making CENSE blog. However, one approach to design (thinking) is highlighted below; it is part of the constellation of approaches that we use at CENSE Research + Design:

The design and innovation cycle

Much of this process has been examined in the previous posts in this series; however, it is worth looking at it again.

Herbert Simon wrote about design as a problem forming (finding), framing, and solving activity (PDF). Other authors, like IDEO’s Tim Brown and the Kelley brothers, have written about design further (for more references, check out CENSEMaking’s library section), but essentially the three domains proposed by Simon hold up as ways to think about design at a very basic level.

What design does is make the process of stabilizing, amplifying, or dampening the emergence of new information intentional. Without a sense of purpose, mindful attention to process, and a sensemaking process put in place by DE, it is difficult to know what is advantageous and what is not. Within the realm of complexity, we run the risk of amplifying and dampening the wrong things, or of ignoring them altogether. This has immense consequences, as even staying still in a complex system is moving: change happens whether we want it or not.

The diagram above places evaluation near the end of the corkscrew process, but that is a bit misleading; it implies that DE-related activities come at the end. What is being argued here is that if the stage isn’t set at the beginning, by asking the big problem finding, forming, and framing questions, then the efforts to ‘solve’ them are unlikely to succeed.

Without the means to understand how new information feeds into the design of the program, we end up serving data to programs that know little about what to do with it, and one of the dangers in complexity is having more information than we can make sense of. In complex scenarios we want to find simplicity where we can, not add more complexity.

To do this and to foster change is to be a designer. We need to consider the program/product/service user, the purpose, the vision, the resources, and the processes in place within the systems we are working in, to create and re-create the very thing we are evaluating while we are evaluating it. In that entire chain lies the reason developmental evaluators might also want to put on their black turtlenecks and become designers as well.

No, designers don’t all look like this.

 

Photo: Blueprint by Will Scullen, used under Creative Commons License

Design and Innovation Process model by CENSE Research + Design

Lower image used under license from iStockphoto.