
The Quality Conundrum in Evaluation


One of the central pillars of evaluation is assessing the quality of something, often described as its merit. Along with worth (value) and significance (importance), assessing the merit of a program, product, or service is one of the principal areas on which evaluators focus their energy.

However, if you think this is relatively simple to do, you would be wrong.

This was brought home clearly in a discussion I took part in during a session on quality and evaluation at the recent conference of the American Evaluation Association, entitled Who decides if it’s good? How? Balancing rigor, relevance, and power when measuring program quality. The session was hosted by Madeline Brandt and Kim Leonard from the Oregon Community Foundation, who presented some of their work evaluating quality within that state's school system.

As they described the context of their work in schools, I was struck by some of the situational variables in play, such as high staff turnover (and a resulting shortage among the staff who remain) and the decision to operate some schools on a four-day week instead of five as a means of addressing shortfalls in funding. I've since learned that Oregon is not alone in adopting the four-day school week; many states have begun experimenting with it to curb costs. The argument is, presumably, that schools can and must do more with less time.

This means that students are receiving up to one-fifth less classroom time each week, yet they are expected to perform at the same level as those with five days. What does that mean for quality? Like much of evaluation work, it all depends on the context.

Quality in context

The United States has a long history of standardized testing, which was instituted partly as a means of ensuring quality in education. The thinking was that, with such diversity in schools, school types, and populations, there needed to be some means of comparing capabilities and achievement across these contexts. A standardized test was presumed to serve this purpose by creating a benchmark (standard) against which student performance could be measured and compared.

While there is a certain logic to this, standardized testing has a series of flaws embedded in its core assumptions about how education works. For starters, it assumes a standard curriculum and a largely one-size-fits-all model of instruction. Anyone who has been in a classroom knows this is simply not realistic or appropriate. Teachers may teach the same material, but the manner in which it is introduced and engaged with is meant to reflect the state of the classroom: its students, physical space, availability of materials, and place within the curriculum (among others).

If we put aside for a minute the ridiculous assumption that all students are alike in their ability and preparedness to learn each day, and just focus on the classroom itself, we can already see the problem with evaluating quality by looking back at the four-day school week. Four-day weeks mean either that teachers are taking shortcuts in how they introduce subjects and are not teaching all of the material they have, or that they are teaching the same material in a compressed amount of time, giving students less opportunity to ask questions and engage with the content. The intervention (i.e., classroom instruction) is therefore not consistent across settings, so how could one expect a standardized test to reflect a common attribute? What quality education means in this context is different than in others.

And that's just the variable of time. Consider the teachers themselves. High staff turnover is likely an indicator of fundamental problems with the job: low pay, poor working conditions, unreasonable demands, insufficient support or recognition, or little opportunity for advancement, to name a few. How motivated, supported, or prepared do you think these teachers are?

With all due respect to those teachers, they may not be competent to facilitate high-quality education in this kind of classroom environment. By that, I mean being unprepared to manage compressed schedules, a lack of classroom resources, demands from standardized tests (and parents), high student-teacher ratios, and individual student learning needs, while also fitting in the other social activities that teachers support around school, such as clubs, sports, and the arts. Probably no teacher has the competency for all of that. Those teachers, at least the ones who don't quit, do what they can with what they have.

Context in Quality

This situation then demands new thinking about what quality means in the context of teaching. Is a high-quality teaching performance one where teachers are better able to adapt, respond to the changes, and manage to simply get through the material without losing their students? It might be.

Exemplary teaching in a context of depleted or scarce resources (time, funding, materials, attention) might look far different than it would under conditions of plenty. The learning outcomes might be considerably different, too. The link between the quality of teaching and learning outcomes thus depends on many contextual variables; if we fail to account for them, we will misattribute causes and effects.

What does this mean for quality? Is it an objective standard or a negotiated, relative one? Can it be both?

This is the conundrum we face when evaluating something like the education system and its outcomes. Are we ‘lowering the bar’ for our students and society by recognizing outstanding effort in the face of unreasonable constraints, or are we showing that quality can exist in even the most challenging of conditions? With one definition, we risk accepting something that under many conditions would be unacceptable; with the other, we risk blaming people for outcomes they can't possibly achieve.

From the perspective of standardized tests, the entire system is flawed to the point where the measurement is designed to capture outcomes that schools aren’t equipped to generate (even if one assumes that standardized tests measure the ‘right’ things in the ‘right’ way, which is another argument for another day).

Speaking truth to power

This year's AEA conference theme was speaking truth to power, and this situation provides a strong illustration of it. While evaluators may not be able to resolve this conundrum, what they can do is illuminate the issue through their work. By drawing attention to standards of quality, their application, and the conditions associated with their realization in practice, not just in theory, evaluation can point to injustices, unreasonable demands, and areas for improvement.

Rather than assert blame or unfairly label something as good or bad, evaluation, when done with an eye to speaking truth to power, can play a role in fostering quality and promoting the kind of outcomes we desire, not just the ones we get. In this way, perhaps the real measure of quality is the degree to which our evaluations do this. That is a standard that, as a profession, we can live up to and that our clients — students, teachers, parents, and society — deserve.

Image credit:  Lex Sirikiat


Meaning and metrics for innovation


Metrics are at the heart of evaluating impact and value in products and services, although they are rarely straightforward. Knowing what makes a good metric first requires some thinking about what a metric means.

I recently read a story on what makes a good metric from Chris Moran, Editor of Strategic Projects at The Guardian. Chris's work is about building, engaging, and retaining audiences online, so he spends a lot of time thinking about metrics and what they mean.

Chris, with support from many others, outlines the five characteristics of a good metric as being:

  1. Relevant
  2. Measurable
  3. Actionable
  4. Reliable
  5. Readable (less likely to be misunderstood)

(What I liked was that he also pointed to additional criteria that didn’t quite make the cut but, as he suggests, could).

This list was developed in the context of communications initiatives, which is exactly the point we need to consider: context matters when it comes to metrics. Context is also holistic, so we need to consider these five criteria (plus, perhaps, the others) as a whole if we're to develop, deploy, and interpret the data from these metrics.
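To make "considering the criteria as a whole" concrete, here is a minimal sketch of my own (the example metric and the true/false judgments in it are illustrative assumptions, not Moran's) that treats the five characteristics as a single holistic checklist rather than five independent scores:

```python
# A minimal sketch: the five characteristics as a holistic checklist.
# The example metric and its judgments are illustrative assumptions.
from dataclasses import dataclass, fields

@dataclass
class MetricCheck:
    relevant: bool
    measurable: bool
    actionable: bool
    reliable: bool
    readable: bool  # i.e., unlikely to be misunderstood

    def is_good(self) -> bool:
        # Holistic: the metric qualifies only if every criterion holds
        # in its context; a single failure undermines the whole.
        return all(getattr(self, f.name) for f in fields(self))

weekly_pageviews = MetricCheck(relevant=True, measurable=True,
                               actionable=False, reliable=True, readable=True)
print(weekly_pageviews.is_good())  # False: not actionable in this context
```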

As John Hagel puts it: we are moving from the industrial age, where standardized metrics and scale dominated, to the contextual age.

Sensemaking and metrics

Innovation is entirely context-dependent. A new iPhone might not mean much to someone who has had one before, but it could be transformative to someone who's never had that computing power in their hand. Home visits by a doctor or healer were once the only way people were treated for sickness (and still are in some parts of the world); now home visits are novel and represent an innovation in many areas of Western healthcare.

Demographic characteristics are one area where sensemaking is critical when it comes to metrics and measures. Sensemaking is, literally, the process of making sense of something within a specific context. It's used when there are no standard or obvious means of understanding something's meaning at the outset; rather, meaning is made through investigation, reflection, and other data. It is a process that involves asking questions about value, and value is at the core of innovation.

For example, identity questions on race, sexual orientation, gender, and place of origin all require intense sensemaking before, during, and after use. Asking these questions gets us to consider: what value is it to know any of this?

How is a metric useful without an understanding of the value it is meant to reflect?

What we've seen from population research is that failure to ask these questions has left many at the margins without a voice: their experience isn't captured in the data used to make policy decisions. We've also seen what happens when these questions are asked unwisely: strange claims about associations, over-generalizations, and stereotypes formed from data that somehow ‘links’ certain characteristics to behaviours without critical thought. We create policies that exclude because we have data.

The lesson we learn from behavioural science is that, with enough data, you can connect pretty much anything to anything. We therefore need to be very careful about what we collect data on and which metrics we use.
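To see how easily "anything connects to anything", consider this minimal sketch (an illustration of chance correlation, not drawn from any real study): it generates a thousand purely random "characteristics" and finds that some of them correlate with an outcome by luck alone.

```python
# A minimal sketch of spurious correlation: with enough variables,
# some will correlate with an outcome purely by chance.
import numpy as np

rng = np.random.default_rng(42)
outcome = rng.normal(size=100)             # an outcome we care about
noise_vars = rng.normal(size=(1000, 100))  # 1,000 unrelated "characteristics"

correlations = [abs(np.corrcoef(v, outcome)[0, 1]) for v in noise_vars]
print(f"strongest spurious correlation: {max(correlations):.2f}")
# Typically around 0.3: strong enough to tempt a policy claim or a
# stereotype, yet meaningless by construction.
```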

The role of theory of change and theory of stage

One reason for these strange associations (or their absence) is the lack of a theory of change to explain why any of these variables ought to play a role in what happens. A good, proper theory of change provides a rationale for why something should lead to something else and what might come from it all. It is anchored in data, evidence, theory, and design (which ties them together).

Metrics are the means by which we assess the fit of a theory of change. What often gets missed is that fit is also time-dependent: some metrics fit better at different points in an innovation's development.

For example, a particular metric might be more useful in later-stage research where there is an established base of knowledge (e.g., when an innovation is mature) than when we are looking at the early formation of an idea. The proof-of-concept stage (i.e., ‘can this idea work?’) is very different from the ‘can this scale?’ stage. To that end, metrics need to be fit to something akin to a theory of stage, which would help explain how an innovation might develop at the early stages versus later ones.
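As a rough sketch of what a "theory of stage" might imply for metric selection (the stage names and example metrics below are illustrative assumptions, not an established framework):

```python
# A minimal sketch: fitting metrics to the stage of an innovation.
# Stage names and metrics are illustrative assumptions.
stage_metrics = {
    "proof_of_concept": ["does the prototype function?",
                         "early user reactions"],
    "pilot": ["adoption by test users", "implementation fidelity"],
    "scale": ["cost per user", "retention", "reliability of outcomes"],
}

def metrics_for(stage: str) -> list[str]:
    # Judging a proof-of-concept by scale metrics (or vice versa)
    # is a category error; the measures must fit the stage.
    return stage_metrics[stage]

print(metrics_for("proof_of_concept"))
```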

Metrics are useful. Blindly using metrics, or using the wrong ones, can be harmful in ways that may themselves be unmeasurable without proper thinking about what metrics do, what they represent, and which ones to use.

Choose wisely.

Photo by Miguel A. Amutio on Unsplash


Understanding Value in Evaluation & Innovation


Value is literally at the root of the word evaluation, yet it is scarcely mentioned in the conversation about innovation and evaluation. It's time to consider what value really means for innovation and how evaluation provides answers.

Design can be thought of as the discipline (the theory, science, and practice) of innovation. Thus, understanding the value of design is partly about understanding the valuation of innovation. At the root of evaluation is the concept of value. One of the most widely used definitions of evaluation (pdf) is that it is about merit, worth, and significance, with worth being a stand-in for value.

The connection between worth and value in design was discussed in a recent article by Jon Kolko of Modernist Studio. He starts from the premise that many designers conceive of value as the price people will pay for something, and points to the dominant orthodoxy in SaaS applications “where customers can choose between a Good, Better, and Best pricing model. The archetypical columns with checkboxes shows that as you increase spending, you ‘get more stuff.’”

Kolko goes on to take a systems perspective on the issue, noting that much of the value created through design is not piecemeal but aggregated into the experience of whole products and services, and not easily divisible into component parts. Value as a factor of cost or price breaks down when it treats our communities, customers, and clients as mere commodities that can be bought and sold.

Kolko ends his article with this comment on design value:

Design value is a new idea, and we’re still learning what it means. It’s all of these things described here: it’s cost, features, functions, problem solving, and self-expression. Without a framework for creating value in the context of these parameters, we’re shooting in the dark. It’s time for a multi-faceted strategy of strategy: a way to understand value from a multitude of perspectives, and to offer products and services that support emotions, not just utility, across the value chain.

Talking value

It's strange that the matter of value is so under-discussed in design, given that creating value is one of its central tenets. Equally perplexing is how little value is discussed as part of the process of creating things or in their final designed form. And since design is really the discipline of innovation, which is the intentional creation of value using something new, evaluation is an important concept in understanding design value.

One of the big questions professional designers wrestle with at the start of any engagement with a client is: “What are you hiring [your product, service, or experience] to do?”

What evaluators ask is: “Did your [product, service, or experience (PSE)] do what you hired it to do?”

“To what extent did your PSE do what you hired it to do?”

“Did your PSE operate as it was expected to?”

“What else did your PSE do that was unexpected?”

“What lessons can we learn from your PSE development that can inform other initiatives and build your capacity for innovation as an organization?”

In short, evaluation is about asking: “What value does your PSE provide and for whom and under what context?”

Value creation, redefined

Without asking the questions above, how do we know value was created at all? Without evaluation, there is no means of claiming that value was generated by a PSE, that expectations were met, or that what was designed was even implemented.

By asking these questions about value and how we know more about it, innovators are better positioned to design PSEs that generate value for their users, customers, clients, and communities, as well as for their organizations, shareholders, funders, and leaders. This redefinition of value as an active concept gives us the opportunity to see value in new places and not waste it.

Image Credit: Value Unused = Waste by Kevin Krejci adapted under Creative Commons 2.0 License via Flickr

Note: If you're looking to use evaluation to better your innovation capacity, contact us at Cense. That's what we do.


Strategy: Myths, fantasies, and reality


A defining feature of sustained excellence in any enterprise is a good strategy: a vision and plan linked to the consistent delivery of something of value. One of the big reasons many organizations fail to thrive is not that they have the wrong strategy, but that they don't have one at all (yet think they do).

Strategy is all about perception.

Whether you think you have one or not is partly a matter of perception. Whether you are delivering a strategy in practice is also a matter of perception. Why? Because strategy is what links what you build your organization for, what you drive it toward, and what you actually achieve. Lots of organizations achieve positive results by happenstance (being in the right place at the right time). That kind of luck can happen to anyone, but it hardly constitutes a strategy.

Statements of intent are also great for creating the perception of strategy, because one can always claim to be working toward something in the abstract. But without a clear sense of how intentions are connected to actions, and those actions to outcomes, there really isn't a strategy.

Do you have a strategy?

The best example of this is the entertaining and instructive illustrated book ‘I Have a Strategy (No You Don’t)‘, in which Howell J. Malham Jr literally illustrates the problems that beset conversations about strategy as it chronicles two characters (Larry and Gary) talking about the subject and busting the myths associated with what strategy is and is not. One exchange between the two goes like this:

Larry: “Hey Gary, I was working a strategy to put a cookie back in a cookie jar but I tripped and fell and the cookie flew into my mouth instead. Good strategy, huh?”

Gary: “That’s not a strategy. That’s a happy accident, Larry.”

The entire book is like this. One misconception after another is clarified through one character using the term strategy to mean something other than what it really is. These misconceptions, misuses, and mistakes with the concept of strategy may be why it is so poorly done in practice.

Malham's work is my favourite on strategy because it encapsulates so many of the real-world conversations I have witnessed (and been a part of) over the years with colleagues and clients alike. Too much conversation about strategy concerns things that are not really strategy at all: wishes, needs, or opportunities.

This isn’t to suggest that all outcomes are planned or connected to a strategy, but the absence of a strategy means you’re operating at the whim of chance, circumstance, and opportunism. This is hardly the stuff of inspiration and isn’t sustainable. Strategy is about connecting purpose, plans, execution, and delivery. Malham defines a strategy as having the following properties:

1. It has an intended purpose;
2. There is a plan;
3. There is a sequence of actions (interdependent events);
4. It leads toward a distinct, measurable goal.

When combined with evaluation, organizations build a narrative and understanding of not only whether a strategy leads toward a goal, but what actions make a difference (and to what degree), what aspects of a plan fit and didn’t fit, and what outcomes emerge from the efforts (including those that were unintended).

A look at much of the discourse on strategy finds that many organizations not only don’t have strategic plans, they don’t even have plans.

Words and action

One of the biggest problems with “capital ‘S’ Strategy” (the kind espoused in management science) is that it is filled with jargon and, ironically, contributes greatly to the very lack of strategic thinking it seeks to overcome. It's one of the reasons I like Malham's book: it cuts through the jargon. I used to work with a senior leader who used all the language of strategy in talks, presentations, and writing, but was wholly incapable of, or unwilling to, commit to a strategic direction when it came to discussing plans and actions for their organization.

Furthermore, a strategy is only marginally useful if you develop it and then don't bother to evaluate what happened, how, and to what effect. Without action tied to strategy, it is no better than a wish list and probably no more useful than a New Year's resolution.

Linking plans to action is why design is such an important, and sadly neglected, part of strategy development. Design is the process of shifting how we see problems, exploring possibilities, and creating pathways that lead to solutions. Design is not theoretical; it is practical, and without design doing, design thinking is impotent.

Two A’s of Strategy: Adaptation vs Arbitrary

The mistake organizations working in zones of high complexity make (which increasingly includes most organizations working in human services) is assuming that strategy needs to be locked in place and executed blindly to be effective. Strategy is developed in and for a context; if that situation changes, the strategy needs to change, too. This isn't about throwing it out, but adapting it.

Adaptive strategy is a means of innovating responsibly, but it can also be a trap: adaptations need to be built on data and experience, not spurious conclusions. Arbitrary decision-making is often what lies at the root of bad (or no) strategy.

Roger Martin is one of the brightest minds on strategy, and he has called out what he sees as sloppy use of the term adaptive strategy as a stand-in for arbitrary decision-making, going so far as to call it a ‘cop-out’. One of the biggest problems is that strategy is often not viewed in systems terms: as an interconnected set of plans, actions, and evaluations made simultaneously, not sequentially.

Good strategy is not a set of steps, but a set of cascading choices that influence the operations and outcomes simultaneously. Strategy is also about being active, not passive, about what it means to design and create an organization.

Grasping strategy for what it is, not what we imagine it to be, can be a key factor in shaping not only what you do, but how well you do it. Having the kind of conversations like those in Howell J. Malham’s book is a means to get things moving. Taking action on those things is another.


Image credit: Photo by Paul Skorupskas on Unsplash


How do we sit with time?


Organizational transformation efforts, from culture change to developmental evaluation, all depend on one ingredient that is rarely discussed: time. How do we sit with it and avoid the trap of aspiring to greatness while failing to give change the time necessary to make it a reality?

Toolkits are a big hit with those looking to create change. In my years of work supporting behaviour change, innovation, and community development with organizations large and small, few terms light up people's faces like "toolkit". Usually the term is mentioned by someone other than me, but that doesn't stop the palpable excitement at the prospect of a set of tools that will solve a complex problem.

Toolkits work with simple problems. A hammer works well with nails. Drills are good at making holes. With enough tools and some expertise, you can build a house. Organizational development and social change are complex challenges where tools don't have the same linear effect. A tool (a facilitation technique, an assessment instrument, a visualization method) can support change-making, but the application and potential outcomes of these tools will always be contextual.

Tools and time

My experience has been that people will go to great lengths to acquire tools, yet put comparatively little effort into using them. A body of psychological research shows there are differences between goals, the implementation intentions behind them, and the actual achievement of those goals. In other words: desiring change, planning and intending to make a change, and actually doing something are all different.

Tools are proxies for this issue in many ways: having tools doesn't mean they get used, or that they actually produce change. Anyone in the fitness industry knows that the numbers of people who try a workout, who buy a club membership, and who regularly show up to work out are quite different.

Or consider the Japanese term Tsundoku, which loosely translates into the act of acquiring reading materials and letting them pile up in one’s home without reading them.

But tools are stand-ins for something far more important and powerful: time.

The pursuit of tools and their use is often hampered because organizations do not invest the time to learn, appropriately apply, refine, and make sense of the products that come from these tools.

A (false) artifact of progress


Consider the book buying (or borrowing) example above: we calculate the cost of the book when really we ought to price out the time required to read it. Or, in the case of practical non-fiction, the time to read it and apply its lessons.

Yet a shelf filled with books provides the appearance of possessing the knowledge contained within, absent any evidence that the contents have been read. The same issue arises with tools: once they are acquired, it's easy to assume the work is largely done. I've seen this firsthand, with people doing exactly what the Buddhist phrase decries:

“Do not confuse the finger pointing to the moon for the moon itself”

It’s the same confusion we see between having data or models and the reality they represent.

These things are all artifacts of progress, and a false equation: more books, more data, or better models do not equal more knowledge. But showing you have more of something tangible is a seductive proxy. Time has no proxy; that's the biggest problem.

Time just disappears, is spent, is used, or whatever metaphor you choose for it. Time is about Chronos or Kairos, the sequence of moments or the moments themselves, but in either case it bears no clear markers.

Creating time markers

There are some simple tricks to create the same accumulation effect in time-focused work, using the kinds of tools often employed to support developmental evaluation and design. When it comes to marking effort, innovation is as much about the process as the outcome. The temptation is to focus on the products (the innovations themselves) and lose what was generated to get there. Here are some ways to change that.

  1. Timelines. Creating live (regular) recordings of key activities and connecting them together in a timeline is one way to show the journey from idea to innovation. It also provides a sober reminder of the effort and time required to go through the various design cycles toward a viable prototype.
  2. Evolutionary Staging. Document the prototypes created through photographs, video, or even showcased versions (in the case of a service or policy where the visual element isn't as prominent). This is akin to the March of Progress image used to show human evolution. By capturing these things and noting the time and timing of what is generated, you create an artifact that shows the time that was invested and what it produced. It's a way to honour the effort put toward innovation.
  3. Quotas & Time Targets. I'm usually reluctant to prescribe a specific amount of time to spend on reflection and innovation-related sensemaking, but it's evident from the literature that goals, targets, and quotas work as effective motivators for some people. If you generate a realistic set of targets for thoughtful work, they become something to aspire to and a driver of activity. By tracking the time invested in sensemaking, reflection, and design, you can better account for what was done and create a marker that makes time seem more tangible (a minimal sketch of this follows the list).
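Here is that minimal sketch (the quota, activities, and hours are illustrative assumptions): log reflective work as it happens and compare the total against a realistic target, turning invisible time into a visible marker.

```python
# A minimal sketch of a time quota for reflective work.
# The quota, activities, and hours are illustrative assumptions.
from datetime import date

WEEKLY_QUOTA_HOURS = 2.0  # an assumed, realistic weekly target

time_log = [  # (date, activity, hours)
    (date(2018, 11, 5), "prototype debrief", 0.75),
    (date(2018, 11, 7), "data sensemaking session", 1.0),
]

logged = sum(hours for _, _, hours in time_log)
print(f"{logged:.2f}h of {WEEKLY_QUOTA_HOURS}h quota "
      f"({logged / WEEKLY_QUOTA_HOURS:.0%})")
```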

These are three ways to make time visible, although it's important to remember that the purpose isn't just to accumulate time but to actually sit with it.

All the tricks and tools in the world won't bring the benefits that time can offer unless an organization is willing to invest in it, mindfully. Except, perhaps, a clock.

Try these out with some simple tasks. Another approach is to treat time like any other resource: budget it. Set aside time in a calendar by booking key reflective activities in, just as you would anything else. Doing this, and keeping to it, requires leadership and the organizational supports necessary to ensure that learning can take place. Consider what is keeping you from taking or making the time to learn, share those thoughts with your peers, and then consider how you might re-design what you do and how you do it to support that learning.

Take time for that, and you’re on your way to something better.


If you’re interested in learning more about how to do this practically, using data, and designing the conditions to support innovation, contact me. This is the kind of stuff that I do. 



Developmental Evaluation’s Traps


Developmental evaluation holds promise for product and service designers looking to understand the process, outcomes, and strategies of innovation and link them to effects. Yet the great promise of DE is also the reason to be most wary of it: traps lie in wait for the unaware.

Developmental evaluation (DE), when used to support innovation, is about weaving design with data and strategy. It’s about taking a systematic, structured approach to paying attention to what you’re doing, what is being produced (and how), and anchoring it to why you’re doing it by using monitoring and evaluation data. DE helps to identify potentially promising practices or products and guide the strategic decision-making process that comes with innovation. When embedded within a design process, DE provides evidence to support the innovation process from ideation through to business model execution and product delivery.

This evidence might include the kind of information that helps an organization know when to scale up effort, change direction (“pivot”), or abandon a strategy altogether.

Powerful stuff.

Except, it can also be a trap.

It’s a Trap!

Star Wars fans will recognize the phrase “It’s a Trap!” as one of special — and much parodied — significance. Much like the Rebel fleet’s jeopardized quest to destroy the Death Star in Return of the Jedi, embarking on a DE is no easy or simple task.

DE was developed by Michael Quinn Patton and others working in the social innovation sector in response to the needs of programs operating in conditions of high volatility, uncertainty, complexity, and ambiguity, to help them function better in that environment through evaluation. This meant providing useful data that recognized the context and allowed for strategic decision-making with rigorous evaluation, rather than using tools ill-suited to complexity and simply doing the ‘wrong thing righter‘.

The following are some of the ‘traps’ I've seen organizations fall into when approaching DE. A parallel set of posts exploring the practicalities of these traps is going up on the Cense site, along with tips and tools to avoid and navigate them.

A trap is something that is usually camouflaged and employs some type of lure to draw people into it. It is, by its nature, deceptive and intended to ensnare those that come into it. By knowing what the traps are and what to look for, you might just avoid falling into them.

A different approach, same resourcing

A major trap in going into a DE is thinking that it is just another type of evaluation, and thus requires the same resources as one might put toward a standard evaluation. Wrong.

DE most often requires more resources to design and manage than a standard program evaluation, for many reasons. One of the most important is that DE is about evaluation + strategy + design (the emphasis is on the ‘+’s). In a DE budget, one needs to account for the fact that three activities normally treated separately are now coming together. This may not mean the costs are necessarily higher (they often are), but the work required will span multiple budget lines.

This also means that, operationally, one cannot simply have an evaluator, a strategist, and a program designer work separately. There must be some collaboration and time spent interacting for DE to be useful. That carries coordination costs.

Another big issue is that DE data can be ‘fuzzy’ or ambiguous, even when collected with a strong design and method, because the innovation activity usually has to be contextualized. Further complicating things, the DE datastream is bidirectional: DE data comes from the program's products and processes as well as from strategic decision-making and design choices. This mutually influencing process generates more data, but it also requires sensemaking to sort through and understand what the data means in the context of its use.

The biggest resource that gets missed? Time. Too often organizations don't give enough time to the conversations needed to make sense of what the data means. Setting aside regular time, at intervals appropriate to the problem context, is a must, yet it rarely gets budgeted in.

The second? Focus. While a DE approach can capture an enormous wealth of data about process, outcomes, strategic choices, and design innovations, there is a need to temper the amount collected. More is not always better. More can be a sign of a lack of focus, leading organizations to collect data for data's sake rather than for a strategic purpose. If you don't have a strategic intent, more data isn't going to help.

The pivot problem

The term pivot comes from the Lean Startup approach and is found in Agile and other product development systems that rely on short-burst, iterative cycles with accompanying feedback. A pivot is a change of direction based on feedback. Collect the data, see the results, and if the results don’t yield what you want, make a change and adapt. Sounds good, right?

It is, except when the results aren't well-grounded in data. DE has given organizations cover for making arbitrary decisions in the name of pivoting when they really haven't executed well or given things enough time to determine whether a change of direction is warranted. I once heard an educator explain how good his team was at pivoting the strategy for training their clients and students; they were taking a developmental approach to the course (because it was on complexity and social innovation). Yet I knew that the team, a group of highly skilled educators, hadn't spent nearly enough time coordinating and planning the course.

There is a difference between a presenter adding something at the last minute to capitalize on what emerged from the situation, thereby improving the quality of the presentation, and someone who has not put the time and thought into what they are doing and is rushing at the last minute. One is a pivot in service of excellence; the other is a failure to execute. The trap is confusing the two.

Fearing success

“If you can’t get over your fear of the stuff that’s working, then I think you need to give up and do something else” – Seth Godin

A truly successful innovation changes things: mindsets, workflows, systems, and outcomes. Innovation affects the things it touches in ways that might not be foreseen. It also means recognizing that things will have to change in order to accommodate the success of whatever innovation you develop. But change can be hard to adjust to, even when it is what you wanted.

It's a strange truth that many non-profits are designed to put themselves out of business. If there were no more political injustices or human rights violations around the world, there would be no Amnesty International. The World Wildlife Fund or Greenpeace wouldn't exist if the natural world were deemed safe and protected. Conversely, there are no longer prominent NGOs devoted to eradicating polio, because we pretty much have… or did we?

Self-sabotage exists for many reasons, including discomfort with change (staying the same is easier than changing), preservation of status, and a variety of interpersonal, relational reasons, as psychologist Ellen Hendrikson explains.

Seth Godin suggests you need to find something else to do if you're afraid of success, and that might work. I'd prefer that organizations do a kind of innovation therapy with themselves: engage in organizational mindfulness and do the emotional, strategic, and reflective work to ensure they are prepared for success, as well as for failure, which is a big part of the innovation journey.

DE is a strong tool for capturing success (in whatever form it takes) within the complexity of a situation; the trap is focusing on too many parts, or on parts that aren't providing useful information. It's not always possible to know this at the start, but there are things that can be done to hone the focus over time. As the saying goes: when everything is in focus, nothing is in focus.

Keeping the parking brake on

And you may win this war that’s coming
But would you tolerate the peace? – “This War” by Sting

You can't drive far or well with your parking brake on. If innovation is meant to change systems, you can't keep the same thinking and structures in place and still expect to move forward. Developmental evaluation is not just for understanding your product or service; it's also meant to inform the ways in which that entire process influences your organization. They are symbiotic: one affects the other.

Just as we might fear success, we may also fail to prepare for (or tolerate) it when it comes. Success with one goal means having to set new goals; it moves the goal posts. It also means reframing what success means going forward. Sports teams face this problem in reframing their mission after winning a championship. The same is true for organizations.

This is why building a culture of innovation, with DE embedded within it, is so important. Innovation can't be considered a ‘one-off’; it needs to be part of the fabric of the organization. If you set yourself up for change, real change, as a developmental organization, you're more likely to be ready for the peace after the war is over, as the lyric above asks.

Sealing the trap door

Learning, which is at the heart of DE, fails in bad systems. Avoiding the traps discussed above requires building a developmental mindset within the organization alongside the DE itself. Without that mindset, it's unlikely anyone will keep clear of them. Change your mind, and you can change the world.

It's a reminder of the need to put in the work to make change real, and that DE is not just plug-and-play. To quote Martin Luther King Jr:

“Change does not roll in on the wheels of inevitability, but comes through continuous struggle. And so we must straighten our backs and work for our freedom. A man can’t ride you unless your back is bent.”


For more on how Developmental Evaluation can help you to innovate, contact Cense Ltd and let them show you what’s possible.  

Image credit: Author


Genetic engineering for your brand


DNA doesn't predetermine our future as biological beings, but it does powerfully influence it. Some have applied the concept of ‘DNA’ to a company or organization in the same way it's applied to biological organisms. Firms like PwC have been at the forefront of this approach, developing organizational DNA assessments and outlining the principles that shape the DNA of an organization. A good brand is an identity that you communicate to yourself and the world around you. A healthy brand is built on healthy DNA.

Tech entrepreneur and writer Om Malik sees DNA as being comprised of those people that form the organization:

DNA contains the genetic instructions used to build out the cells that make up an organism. I have often argued that companies are very much like living organisms, comprised of the people who work there. What companies make, how they sell and how they invent are merely an outcome of the people who work there. They define the company.

The analogy between a company's DNA and the people who make it up is apt because, as he points out, organizations reflect the values, habits, mindsets, and focus of those who run them. For that reason, understanding your organization's DNA structure might be critical to shaping corporate direction and brand, and to promoting any type of change, as we see in the case of Facebook.

DNA dilemma: The case of Facebook

Facebook is under fire these days. To anyone paying attention to the social media giant, the issue isn't that this is happening now, but why it didn't happen sooner. Back when the site was first opened up to allow non-university students to have accounts (signaling what would become the global brand it is today), privacy was a big concern. I still recall listening to a Facebook VP interviewed on a popular tech podcast who basically sloughed off every concern the interviewer had about privacy, saying the usual "we take this seriously" stuff but offering no example of how that was true, just as the world was about to jump on the platform. I've heard that same kind of interview repeated dozens of times since the mid-2000s, including just nine months before Mark Zuckerberg's recent ‘mea culpa’ tour.

Facebook has never shown much (real) attention to privacy because its business model is all about keeping users as open as possible in order to collect as much data as possible from them, and to sell as many services to them, through them, about them, and for others to manipulate. The Cambridge Analytica story simply exposed to the world what has been happening for years.

Anyone who’s tried to change their privacy settings knows that you need more than a Ph.D. to navigate them* and, even then, you’re unlikely to be successful. Just look at the case of Bobbi Duncan and Katie McCormick who were outed as gay to their families through Facebook even though they had locked down their own individual privacy settings. This is all part of what CEO Mark Zuckerberg and the folks at Facebook refer to as “connecting the social graph.”

The corporate biology of addiction

In a prescient post, Om Malik wrote about Facebook’s addiction to its business model based on sharing, openness, and exploitation of its users’ information mere weeks before the Cambridge Analytica story came out.

Facebook’s DNA is that of a social platform addicted to growth and engagement. At its very core, every policy, every decision, every strategy is based on growth (at any cost) and engagement (at any cost). More growth and more engagement means more data — which means the company can make more advertising dollars, which gives it a nosebleed valuation on the stock market, which in turn allows it to remain competitive and stay ahead of its rivals.

Whether he knew it or not, Malik was describing an epigenetic model of addiction. Much emerging research on addiction points to a relationship between genes and addictive behaviour. This is a two-way street, where genes influence behaviour and behaviour influences a person's genes (something called epigenetics). The more Facebook seeks to connect through its model, the more the behaviour is reinforced, the more it feels a ‘need’ to do it, and so it repeats.

In systems terms, this is called a reinforcing loop, and it is part of a larger field of systems science called system dynamics. System dynamics has been applied to public health to show how we get caught in traps and the means we use to get out of them. By applying an addiction model and system dynamics to the organization, we might better understand why some organizations change and some don't.
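For readers unfamiliar with reinforcing loops, here is a minimal sketch (the variables and coefficients are illustrative assumptions, not a model of Facebook) showing how such a loop amplifies itself with each pass:

```python
# A minimal sketch of a reinforcing loop: growth feeds engagement,
# engagement feeds data, data feeds revenue, revenue feeds growth.
# All coefficients are illustrative assumptions.
growth = 1.0
for step in range(5):
    engagement = 0.8 * growth   # engagement rises with growth
    data = 1.2 * engagement     # more engagement yields more data
    revenue = 0.9 * data        # more data means more ad dollars
    growth += 0.5 * revenue     # revenue is reinvested in growth
    print(f"step {step}: growth={growth:.2f}")
# Each pass amplifies the last; without a balancing loop (regulation,
# user attrition), the behaviour only escalates.
```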

Innovation therapy

The first step toward any behaviour change for an addiction is to recognize the addiction in the first place. Without acknowledgment of a problem, there can’t be much in the way of self-support. This acknowledgment has to be authentic, which is why there is still reason to question whether Facebook will change.

There are many paths to addiction treatment, but the lessons from treating some of the most pernicious behaviours, like cigarette smoking and alcohol use, suggest that treatment is likely to succeed when a series of small, continuous, persistent changes are made in a supportive environment. One needs to learn from each step taken (i.e., evaluate progress and outcomes at each step), integrate that learning, and continue through the inevitable cycling through stages (non-linear change) that sometimes involves moving backward or not knowing where along the change journey you are.

Having regulations or external pressures to change can help, but too much can paralyze action and stymie creativity. And while being motivated to change is important, sometimes it helps to just take action and let the motivation follow.

If this sounds a lot like the process of innovation, you’re right.

Principled for change

Inspiring change in an organization, particularly one with a clear addiction to a business model (a way of doing things, seeing things, and acting), requires the kind of therapy we might see in addiction support programs. As in those programs, there isn't one way to do it, but there are common principles. These include:

  1. Recognize the emotional triggers involved. Most people suffering from addictions can rationalize the reasons to change, but the emotional reasons are a lot harder. Fear, attraction, and the risk of doing things differently can bubble up when you least expect it. You need to understand these triggers, deal with the emotional aspects of them — the baggage we all bring.
  2. Change your mindset. Successful innovation involves a change of practice and a change of mindset. The innovator’s mindset goes from a linear focus on problems, success, and failure to a non-linear focus on opportunities, learning, and developmental design.  This allows you to spot the reinforcing looping behaviour and addiction pathways as well as what other pathways are open to you.
  3. Create better systems, not just different behaviour. Complex systems have path dependencies: the ruts that shape our actions, often unconsciously and out of habit. Consider the ways you organize yourself, your organization's jobs and roles, the income streams, the systems of rewards and recognition, the feedback and learning you engage with, and the composition of your team. This rethinking and reorganization is what changes DNA; otherwise, the DNA will continue to express itself through your organization in the same way.
  4. Make change visible. Use evaluation to document what you do and what it produces, and continue to structure your work to serve the learning from this. Inertia comes from having no direction and nothing to work toward. We are beings geared toward constant motion and making things; it's what makes us human. Make change, by design. Make it visible through evaluation and visual thinking, including the ups, downs, and sideways. A journey involves knowing where you are (even if that's lost) and where you're going (even if that changes).

Change is far more difficult than people often think. Change initiatives that are rooted solely in motivation are unlikely to produce anything sustainable. You need to get to the root, the DNA, of your organization and build the infrastructure around it to enable it to do the work with you, not against you. That, in Facebook terms, is something your brand and its champions will truly ‘Like’.


* Seriously. I have a Ph.D. and am reasonably tech literate and have sat down with others with similar educational backgrounds — Ph.D.’s, masters degrees, tech startup founders — and we collectively still couldn’t figure out the privacy settings as a group.

References: For those interested in system dynamics or causal loop modeling, check out this great primer from Nate Osgood at the University of Saskatchewan. His work is top-notch. Daniel Kim has also written some excellent, useful, and practical stuff on applying system dynamics to a variety of issues.

Image credit: Shutterstock used under license.