Category: design thinking

business, design thinking, evaluation, innovation

Designing for the horizon


‘New’ is rarely sitting directly in front of us; it sits on the horizon, something we must go out to meet or that is coming toward us, waiting to be received. In either case, innovation (doing something new for benefit) requires different action, not repeated action with new expectations.

I’ve spent time in different employment settings where my full-time job was to work on innovation, whether through research, teaching or some kind of service contribution, yet found the opportunities to truly innovate relatively rare. The reason is that these institutions were not designed for innovation, at least in the current sense of the word. They were well established and drew upon the practices of the past to shape the future, rather than shaping the future by design. On many occasions, even when there was a chance to build something new from the ground up — a unit, centre, department, division, school — the choice was to replicate the old and hope for something new.

This isn’t how innovation happens. One of those who understood this better than most was Peter Drucker. A simple search through the myriad quotes attributed to him will find unparalleled wisdom pertaining to commitment, work ethic, and management. Included in that wisdom is the simple phrase:

If you want something new, you have to stop doing something old

Or, as the quote often attributed to Henry Ford suggests:

Do what you’ve always done and you will get what you’ve always got

Design: An intrapreneurial imperative

In each case throughout my career, I’ve chosen to leave and pursue opportunities that are more nimble and allow me to really innovate with people on a scale appropriate to the challenge or task. Yet this choice to be nimble often comes at the cost of scale, which is why I work as a consultant: supporting larger organizations as they change by bringing the agility of innovation to institutions not well set up for it (and helping them set up for it).

There are many situations where outside support through someone like a consultant is not only wise, but maybe the only option for an organization needing fresh insight and expertise. Yet, there are many situations where this is not the case. In his highly readable book The Designful Company, Marty Neumeier addresses the need for creating a culture of non-stop innovation and ways to go about it.

At the core of this approach is design. As he states:

If you want to innovate, you gotta design

Design is not about making things look pretty; it’s about making things stand out, differentiating them from others in the marketplace**. (Good) design is what makes these differentiated products worthy of consideration or adoption because they meet a need, satisfy a desire or fill a gap somewhere. As Neumeier adds:

Design contains the skills to identify possible futures, invent exciting products, build bridges to customers, crack wicked problems, and more.

When posed this way, one is left asking: why is everyone not trained as a designer? Or, put another way: why aren’t most organizations talking about design?

Being mindful of the new

This brings us back to Peter Drucker and another pearl of wisdom gained from observing the modern organization and the habits that take place within it:

Follow effective action with quiet reflection. From the quiet reflection will come even more effective action.

This was certainly not part of the institutional culture of the organizations I belonged to, and it’s not part of many of the organizations I’ve worked with. The rush to do, to take action, is rarely complemented with reflection because reflection looks like inaction. While I might have created habits of reflective practice in my work as an individual, that is not sufficient to create change in an organization without some form of collective, or at least shared, reflection.

To test this out, ask yourself the following questions of the workplace you are a part of:

  • Do you hold regular, timely gatherings for workers to share ideas, discuss challenges and explore possibilities without an explicit outcome attached to the agenda?
  • Is reflective practice part of the job requirements of individuals and teams where there is an expectation that it is done and there are performance review activities attached to such activity? Or is this a ‘nice to have’ or ‘if time permits’ kind of activity?
  • Are members of an organization provided time and resources to deliver on any expectations of reflective practice, both individually and collectively?
  • Are other agenda items like administrative or corporate affairs, news, or ‘emergencies’ regularly influencing and intruding upon the agendas of gatherings such as strategic planning meetings or reflection sessions?
  • Is evaluation a part of the process of reflection? Do you make time to review evaluation findings and reflect on their meaning for the organization?
  • Do members of the organization — from top to bottom — know what reflection is or means in the context of their work?
  • Does the word mindfulness come into conversations in the organization at any time in an official capacity?

Designing mindfulness

If innovation means design and effective action requires reflection, it can be surmised that designing mindfulness into the organization can yield considerable benefits. Certain times of year — New Year’s Day (like today’s posting date), Thanksgiving (in Canada & the US), birthdays, anniversaries, religious holidays, or even the end of the fiscal year or quarter — can prompt some reflection.

Designing mindfulness into an organization requires taking the spirit that comes from these events and making it regular. This means protecting time to be mindful (just as we usually take time off for holidays), building regular practices into the workflow much as we do with other activities, and including data (evaluation evidence) to support and guide that reflection. Sensemaking time to bring it all together as a group is also key, as is the potential to use design as a tool for foresight and envisioning new futures.

To this last point I conclude with another quote attributed to Professor Drucker:

The best way to predict your future is to create it

As you begin this new year, new quarter, or new day, consider how you can design your future and create the space in your organization — big or small — to reflect and act more mindfully and effectively.


** This could be a marketplace of products, services, ideas, attention or commitment.

Photo credit: Hovering on the Horizon by the NASA Earth Observatory used under Creative Commons Licence via Flickr

design thinking, evaluation, social innovation

The Ecology of Innovation: Part 1 – Ideas

Innovation Ecology

There is a tendency when looking at innovation to focus on the end result of a process of creation rather than on one node in a larger body of activity; yet when we expand our frame of reference to see these connections, innovation starts to look much more like an ecosystem than a simple outcome. This first post in a series examines innovation ecology from the place of ideas.

Ideas are the kindling that fuels innovation. Without good ideas, bold ideas, and ideas that have the potential to move our thinking, actions and products further, we are left with the status quo: what is, rather than what might be.

What is often missed in the discussion of ideas is the backstory and the connections between thoughts that lead to the ideas that may eventually become an innovation*. Inattention to (or unawareness of) this backstory might contribute to why many people think they are uncreative or believe they have low innovation potential. Drawing greater attention to these connections and framing them as part of an ecosystem has the potential not only to free people from the tyranny of having to create the best ideas, but also to expose the wealth of knowledge generated in pursuit of those ideas.

Drawing Connections

Connections is the title of a book by science historian James Burke that draws on his successful British science documentary series, which first aired in the 1970s and was recreated in the mid-1990s. The premise of the book and series is to show how ideas link to and build on one another to yield the scientific insights that we see. By viewing ideas in a collective realm, we see how they can and do connect, weaving together a tapestry of knowledge that is far more than the sum of its parts.

Too often we see the celebration of innovation as focused on the parts – the products. This is the iPhone, the One World Futbol, the waterless toilet, the intermittent windshield wiper or a process like the Lean system for quality improvement or the use of checklists in medical care. These are the ideas that survive.

The challenge with this perspective on ideas is that it appears to be all-or-nothing: either the idea is good and works or it is not and doesn’t work.

This latter way of thinking imposes judgement on the end result, yet is strangely at odds with innovation itself. It is akin to judging flour, salt, sugar or butter to be bad because a baker’s cake didn’t turn out. Ideas are the building blocks — the DNA, if you will — of innovations. But, like DNA (and RNA), it is only in their ability to connect, form and multiply that we really see innovation yield true benefit at a system level. Just like the baker’s ingredient list, ideas can serve different purposes to different effects in different contexts, and the key is knowing (or uncovering) what that looks like and learning what effect it has.

From ideas to ecologies

An alternative to the idea-as-product perspective is to view ideas as part of a wider system. This takes James Burke’s connections to a new level and views ideas as part of a symbiotic, interactive, dynamic set of relations. Just like the above example of DNA, there is a lot of perceived ‘junk’ in the collection that may have no obvious benefit, yet by its existence it enables the non-junk to reveal and produce its value.

This biological analogy can extend further to the realm of systems. The term ecosystem embodies this thinking:

ecosystem |ˈekōˌsistəm, ˈēkō-| {noun}

Ecology

a biological community of interacting organisms and their physical environment.

• (in general use) a complex network or interconnected system: Silicon Valley’s entrepreneurial ecosystem | the entire ecosystem of movie and video production will eventually go digital.

Within this perspective on biological systems, is the concept of ecology:

ecology |iˈkäləjē| {noun}

1 the branch of biology that deals with the relations of organisms to one another and to their physical surroundings.

2 (also Ecology) the political movement that seeks to protect the environment, especially from pollution.

What is interesting about the definitions above, drawn from the Oxford English Dictionary, is that they focus on biology, the discipline where the concept was first explored and studied. The definition of biology used in the Wikipedia entry on the topic states:

Biology is a natural science concerned with the study of life and living organisms, including their structure, function, growth, evolution, distribution, and taxonomy.

Biologists do not look at ecosystems, decide which animals, plants or environments are good or bad, and proceed to discount them; rather, they look at what each brings to the whole, its role and its relationships. Biology is not without evaluative elements — judgement is still applied to these ‘parts’ of the system, as certain species, environments and contexts are more or less beneficial for certain goals or actors/agents in the system than others — but judgement is always contextualized.

Designing for better idea ecologies

Contextual learning is part of sustainable innovation. Unlike natural systems, which function according to hidden rules (“the laws of nature”) that govern ecosystems, human systems are created and intentional — designed. Many of these systems are designed poorly or with little thought to their implications, but because they are designed we can re-design them. Our political systems, social systems, living environments and workplaces are all examples of human systems. Even families are designed systems, given the social roles, hierarchies, expectations and membership ‘rules’ that they each follow.

If humans create designed systems, we can do the same for the innovation systems we form. By viewing ideas as part of an innovation ecosystem, we create an opportunity to do more with what we create. Rather than lead a social Darwinian push towards the ‘best’ ideas, an idea ecosystem creates the space for ideas to be repurposed, built upon and revised over time. Thus, our brainstorming doesn’t have to end with whatever we come up with at the end (and may hate anyway); rather, it is ongoing.

This commitment to ongoing ideation, sensemaking and innovation (and the knowledge translation, exchange and integration that go with them) is what distinguishes a true innovation ecosystem from a good idea done well. In future posts, we’ll look at this concept of the ecosystem in more detail.

Brainstorming Folly

Tips and Tricks:

Consider recording your ideas and revisiting them over time. Scheduling brief, regular moments to revisit your notebooks and content keeps ideas alive. Treat the effort of brainstorming and bringing people together as an investment that can yield returns over time, not just in a single moment. Shared Evernote notebooks, Google Docs, (searchable) libraries of artifacts, or regular revisiting of project memos can be simple, low-cost and high-yield ways to draw on your collective intellectual investment over time.
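For those inclined to tinker, a minimal sketch of what such a revisitable idea log could look like in code follows. It is purely illustrative: the file name, fields and functions are invented for this example rather than drawn from any particular tool.

```python
# A minimal, illustrative sketch of a searchable idea log. The file name,
# fields and functions are hypothetical, not a prescribed tool.
import json
import datetime
import pathlib

LOG = pathlib.Path("idea_log.json")  # hypothetical storage location

def add_idea(text, tags):
    """Record an idea with a date stamp so it can be revisited later."""
    entries = json.loads(LOG.read_text()) if LOG.exists() else []
    entries.append({
        "date": datetime.date.today().isoformat(),
        "text": text,
        "tags": tags,
    })
    LOG.write_text(json.dumps(entries, indent=2))

def revisit(tag=None):
    """Return every recorded idea, optionally filtered by tag."""
    entries = json.loads(LOG.read_text()) if LOG.exists() else []
    return [e for e in entries if tag is None or tag in e["tags"]]

add_idea("Pair the community garden with a youth design course", ["education"])
for entry in revisit("education"):
    print(entry["date"], "-", entry["text"])
```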

* An innovation for this purpose is a new idea realized for benefit.

Image Credits: Top: Evolving ecology of the book (mindmap) by Artefatica used under Creative Commons License from Flickr.

Bottom: Brainstorming Ideas from Tom Fishburne (Marketoonist) used under commercial license.

art & design, design thinking, innovation

Creativity and Craft, Myth and Muscle

Applying Creative Muscle

Creativity is a word shrouded in myth, held up as an elusive, seductive object that will reveal the true secrets of innovation if ever reached. Creativity is something we all have, but not all of us are craftspeople, and knowing where these two separate and where they meet is the difference between myth and the muscles needed to turn creativity into innovations.

A tour of blogs, journals, and magazines that cover innovation — from Inc., Fast Company, Harvard Business Review, The Atlantic and Entrepreneur all the way to Brain Pickings — will find one topic more visible than most: creativity.

Creativity is one of those terms that everyone knows, that many use, that has multiple meanings and that is highly dependent on person and context. It’s also something that many of us feel we lack. This is not surprising given the way we set our schools and workplaces up, as Sir Ken Robinson has discussed throughout his career.

Robinson delivered perhaps one of the best, and certainly one of the most viewed, talks on this at TED a few years ago, illustrating the ways creativity gets ‘schooled’ out of us early on:

Accessing creativity

A look at the evidence base — which is enormous, unstructured, and varied in quality and scope — finds that creativity is hardly the mythical thing it gets made out to be and, following Sir Ken’s points raised in his TED talk, something we all have in us that may simply be hidden. More than anyone, Dr Keith Sawyer knows this, having put together perhaps the strongest collection of evidence for the application of creativity in his books Explaining Creativity and, more recently, Zig Zag. (Both books are highly recommended.)

Sawyer dispels myths such as the creative genius and the “flash of insight”, pointing instead to creativity as the cultivation of practices and habits that people go through to generate insights and products. The ‘zig zag’ is a metaphor for taking switchbacks to climb a mountain rather than going straight uphill. As you engage in creative thinking and action you build a deeper knowledge base, hone and acquire skills, and simply become more creative. “Creative people” are those who engage in these practices, build the habits of mind of creativity, and persist through each zig and zag along the way.

Design and design thinking are often associated with creativity because they are, in part, about creatively finding, framing and addressing problems through a structured process of inquiry, prototyping and revision. David and Tom Kelley, in their recent book Creative Confidence, point to design thinking as the layered foundation upon which much creativity is built. The disciplined, guided process that design thinking (well applied) offers is a vehicle for building creative confidence in those who might not feel very creative in what they do.

The process of design thinking — illustrated in the CENSE model of innovation development below — fits with Sawyer’s assertion of how creativity unfolds.

The CENSE design and innovation corkscrew model

The role of craft

What Robinson, Sawyer, the Kelley brothers and others have done is dispel the myth that creativity is some otherworldly trait and show that it’s something for all of us. What can get lost in the blind adoption of this way of thinking is attention to craft.

Craft is the technical skill of applying creativity to a problem or task, and it varies widely. The debate over whether the term designer belongs to everyone who applies creativity to solving problems or only to those with formal design training is largely one of craft.

Craft is the thing that brings wisdom from experience and technical skill in transforming creative ideas into quality products, not just interesting ones.

In our efforts to free people from the shackles of their education and a social world that told them they weren’t creative, we’ve put aside discussion of craft in the hope of simply getting people moving and creating. That is so very important to unlocking creative confidence and ensuring that our efforts to develop social innovations are truly social and engage the widest possible number of participants. However, it will be craft that ensures these solutions don’t turn into what George Carlin referred to as (great) ideas that suck.

Building design practice in the everyday

The habits of creativity are just that: habits. And if design is the way of applying creativity to problems, then building a design practice is key. This means bringing elements of design into the way you operate your enterprise. Spending a lot of time finding the right problems is a start (as discussed in a previous post). Discover, inquire and be curious. Visualize, prototype, create small ‘safe-fail’ experiments, and ensure that there is a learning mechanism through evaluation to allow your enterprise to adapt.

This is all easier said than done. It can be easy to be satisfied with being creative, but being excellent involves craft, and that requires something beyond creativity alone. It may involve training (formal or otherwise); it most certainly involves mindful attention to the work (which is what underlies the ‘10,000 hour rule’ of practice that makes someone an expert); but it also requires skill. Many will find their creative talents in art, management, leadership, or service, but not all will be remarkable in exercising that skill.

To put it another way: it’s like a muscle. Everyone can work their muscles and develop them with training, nutrition, rest, and attention, but people respond differently depending on how all of those activities come together. This helps explain why one person might be better suited to long-distance running, while another is good at bulking up and still another is far more flexible on the yoga mat.

We are all creative. We are all designers, too. But not all of us are stellar designers for all things, and it’s important to build our collective design literacy, which includes knowing when and how to cultivate, hire and retain craftspeople and not just assume we can design-think our way through everything. This last point is what will ensure that design thinking doesn’t fade away as a fad after it “didn’t produce results” because people have confused creativity with craft, myth with muscle.

References: 

Kelley, T., & Kelley, D. (2013). Creative confidence: unleashing the creative potential within us all. New York, N.Y.: Crown Publishers.

Sawyer, R. K. (2012). Explaining creativity: The science of human innovation (2nd ed.). Oxford, UK: Oxford University Press.

Sawyer, R. K. (2013). Zig Zag: The Surprising Path to Greater Creativity. San Francisco, CA: Jossey-Bass.

art & design, design thinking, food systems, social systems, systems thinking

“If You Build It..”: A Reflection on A Social Innovation Story

If You Build It is a documentary about a social innovation project aimed at cultivating design skills with youth to tackle education and social issues in an economically challenged community in North Carolina. The well-intentioned, well-developed story is not unfamiliar to those interested in social innovation, but while inspiring to some, these stories mask bigger questions about the viability, opportunity and underlying systems issues that factor into the true impact of such initiatives. (Note: spoiler alert — this essay discusses the film and its plotlines, yet hopefully won’t dissuade you from seeing a good film.)

Last week I had the opportunity to see Patrick Creadon‘s terrific new documentary “If You Build It” at the Bloor Hot Docs Cinema in Toronto as part of the monthly Doc Soup screening series. It was a great night of film, discussion and popcorn that inspired commentary not just about the film, but about the larger story of social innovation that the film contributes to.

If You Build It is the story of the Project H studio that was developed in Bertie County (click here for a film outline) and run for two years by Emily Pilloton and her partner Matthew Miller. To learn more about the start of the story and the philosophy behind Project H, Emily’s TED talk is worth the watch:

It’s largely a good-news kind of story of how design can make a difference to the lives of young people and potentially do good for a community at the same time. While it made for a great doc and some inspiring moments, the film prompted thoughts about what goes on beyond the narrative posed by the characters, the community and those seeking to innovate through design and education.

Going beyond the story

Stories are often so enjoyable because they can be told in multiple ways with emphasis placed on different aspects of the plot, the characters and the outcomes. For this reason, they are both engaging and limiting as tools for making decisions and assessing impact of social interventions. It’s why ‘success stories’ are problematic when left on their own.

One of the notable points raised in the film is that the cost of the program was $150,000 (US), down from the original budget of $230,000 because Emily and Matthew (and, later on, a close friend who helped out in the final few months) took no salary. The program was funded by grants. Three trained designers and builders taught students, built a farmers’ market, and administered the program at no labour cost at all.

The film mentions that the main characters — Matthew and Emily — live off credit, savings and (presumably additional) grants. While this level of commitment to the idea of the Bertie County project is admirable, it’s not a model that many can follow. Without knowing anything about their family support, savings or debt levels, the idea of coming out of school and working for free for two years is out of reach of most young, qualified designers of any discipline. It is also — as we see in the film — not helpful to the larger cause, as it allows Bertie County to abdicate responsibility for the project and lessens the community’s sense of ownership over the outcomes.

One segment of If You Build It looks back on Matthew’s earlier efforts to apply what he learned at school to provide a home for a family in Detroit, free of charge, in 2007. Matthew built it himself and gave it to a family with the sole condition that they pay the utilities and electricity bills, which amounted to less than the family had been paying in rent alone. That part of the story ends when Matthew returns to the home a few years later to find the inside gutted and the house deserted, long after he had to evict the original family — nine months after they took possession, having failed to pay even a single bill as agreed.

From Bertie County to Detroit and back

The Detroit housing experience is a sad story, and there is a lot of context we don’t get in the film, but two lessons from that experience are repeated in the Bertie County story. In both cases, we see something offered that wasn’t necessarily asked for, with no up-front commitment or investment, and we see the influence that the larger system has on the outcomes.

In Detroit, a family was offered a house, yet they were transplanted into a neighbourhood that is (like many in Detroit) sparsely populated, depressed, and without much infrastructure to enable a family to make the house a home easily. Detroit is still largely a city devoted to the automobile, and there are wide swaths of the city with one usable home on every three or four lots. It’s hard to conceive of that as a neighbourhood. Images like the one below are still common in many parts of the city, even though it is going through a notable re-energizing shift.

Rebirth of Detroit

In the case of Bertie County, the same pattern repeats in a different form. The school district got an entire program for free, even to the point of refusing to pay salaries for the staff (Emily and Matthew) over two years, after the initial year ended with the building of a brand-new farmers’ market pavilion that was fully funded by Project H and its grants.

The hypothesis ventured by Patrick Creadon when he spoke to the Doc Soup audience in Toronto was that there was some resentment toward the project (which had been initiated by a change-pushing school superintendent who was let go at the film’s start and who had brought Emily and Matthew to the community) and some entrenched beliefs about education and the way things were done in that community.

Systems + Change = Systems Change (?)

There is a remarkably romantic view of how change happens in social systems. Bertie County received a great deal without providing much in the way of support. While the Studio H project had some community cheerleaders like the mayor and a few citizens, it appeared from the film that the community — and school board — was largely disengaged from the activities at Studio H. This invokes memories of Hart’s Ladder of Participation (PDF), which was developed for youth but works for communities, too. When there is a failure to truly collaborate, the ownership of the problem and solution are not shared.

At no time in the film do you get a sense of shared ownership of the problem and solution between Studio H, the school board, and the community. While the ideas were rooted in design research, the community wasn’t invested — literally — in solving its problems (through design, at least). It represents a falsehood of design research that says you can understand a community’s needs and address them successfully through simple observation, interviews and data gathering.

Real, deep research is hard. It requires understanding not just the manifestations of the system, but the system itself.

Systems Iceberg

Very often that kind of analysis is missing from these kinds of stories, which make for great films and books, but not for long-term success.

In a complex system, meaning and knowledge are gained through interactions, thus we need stories and data that reflect what kinds of interactions take place and under what conditions. Looking at the systems iceberg model above, the tendency is to focus on the events (the Studio H’s), but often we need to look at the structures beneath.

To be sure, there is a lot to learn from Studio H now and from the story presented in If You Build It. The lesson is in the prototyping: Emily and Matthew provide a prototype that shows us it can be done and what kind of things we can learn from it. The mistake is trying to replicate Studio H as it is represented in the film, rather than seeing it as a prototype.

In the post-event Q & A with the audience, a well-intentioned gentleman working with school-building in Afghanistan asked Patrick Creadon how or whether he could get Emily and Matthew to come there and help (with pay) and Creadon rightly answered that there are Emilys and Matthews all over the place and that they are worth connecting to.

Creadon is half right. There are talented, enthusiastic people out there who can learn from the experience of the Studio H team, but probably far fewer who have the means to assume the risk that Emily and Matthew did. Those are the small details that separate a good story from a sustainable, systemic intervention that really innovates in a way that changes the system. But it’s a start.

If You Build It is in theatres across North America.

complexity, design thinking, emergence, evaluation, systems science

Developmental Evaluation and Design

Creation for Reproduction

Innovation is about channeling new ideas into useful products and services, which is really about design. Thus, if developmental evaluation is about innovation, then it is also fundamental that those engaging in such work — on both the evaluator and program ends — understand design. In this final post in this first series of Developmental Evaluation and…, we look at how design and design thinking fit with developmental evaluation and what the implications are for programs seeking to innovate.

Design is a field of practice that encompasses professional domains, design thinking, and critical design approaches altogether. It is a big field, a creative one, but also a space where there is much richness in thinking, methods and tools that can aid program evaluators and program operators.

Defining design

In their excellent article on designing for emergence (PDF), OCAD University’s Greg Van Alstyne and Bob Logan introduce a definition they set out to be the shortest, most concise one they could envision:

Design is creation for reproduction

It may also be the best (among many — see the Making CENSE blog for others) because it speaks to what design does, what it is intended to do, and where it came from, all at the same time. A quick historical look at design finds that the term didn’t really exist until the industrial revolution. It was not until we could produce things and replicate them on a wide scale that design actually mattered; prior to that, what we had was simply referred to as craft. One did not supplant the other. However, as societies transformed — through migration, technology development and adoption, and shifting political and economic systems that increased collective action and participation — we saw things (products, services, and ideas) primed for replication and distribution and thus, designed.

The products, services and ideas that succeeded tended to be better designed for such replication in that they struck a chord with an audience who wanted to further share and distribute the object. (This is not to say that all things replicated are of high quality or ethical value, just that they found the right purchase with an audience and were better designed to provoke that.)

In a complex system, emergence is the force that provokes the kind of replication that we see in Van Alstyne and Logan’s definition of design. With emergence, new patterns arise from activity that coalesces around attractors, and this is what produces novelty and new information for innovation.

A developmental evaluator is someone who creates mechanisms to capture data and channel it to program staff or clients who can then make sense of it. From there, they can choose to take actions that stabilize that new pattern of activity in whatever manner possible, amplify it, or — if it is not helpful — make adjustments to dampen it.
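As a loose illustration of that channeling logic (a sketch invented for this post, not anything prescribed by the DE literature), the toy function below maps an observed shift in a program pattern to one of those three responses; the threshold and wording are assumptions.

```python
# A toy sketch of the stabilize/amplify/dampen channel described above.
# The 5% threshold and the phrasing are invented for illustration only.

def recommend(change_pct: float, helpful: bool) -> str:
    """Suggest a DE-style response to a shift in an observed pattern."""
    if abs(change_pct) < 5:      # little movement: hold the pattern steady
        return "stabilize: the pattern is holding, keep supporting it"
    if helpful:                  # a helpful new pattern is growing
        return "amplify: direct attention and resources to this pattern"
    return "dampen: adjust activities to weaken this unhelpful pattern"

print(recommend(2.0, helpful=True))    # -> stabilize
print(recommend(12.0, helpful=True))   # -> amplify
print(recommend(12.0, helpful=False))  # -> dampen
```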

But how do we do this if we are not designing?

Developmental evaluation as design

A quote from Nobel Laureate Herbert Simon is apt when considering why the term design is appropriate for developmental evaluation:

“Everyone designs who devises courses of action aimed at changing existing situations into preferred ones”.

Developmental evaluation is about modification, adaptation and evolution in innovation (poetically speaking) using data as a provocation and guide for programs. One of the key features that makes developmental evaluation (DE) different from other forms of evaluation is the heavy emphasis on use of evaluation findings. No use, no DE.

But further, what separates DE from utilization-focused evaluation (PDF) is that the use of evaluation data is intended to foster development of the program, not just use. I’ve written about what development looks like in other posts. No development, no DE.

Returning to Herb Simon’s quote, we see that the goal of DE is to provoke some discussion of development and thus change, so it could be argued that, at least at some level, DE is about design. That is a tepid assertion. A bolder one is that design is actually integral to development and thus developmental design is what we ought to be striving for through our DE work. Developmental design is not only about evaluative thinking, but design thinking as well. It brings together the spirit of experimentation within complexity and the feedback systems of evaluation with a design sensibility around how to sensemake, pay attention to, and transform that information into a new product evolution (innovation).

This sounds great, but if you don’t think about design then you’re not thinking about innovating, and that means you’re not really developing your program.

Ways of thinking about design and innovation

There are numerous examples of design processes and steps. A full coverage of all of this is beyond the scope of a single post and will be expounded on in future posts here and on the Making CENSE blog for tools. However, one approach to design (thinking) is highlighted below and is part of the constellation of approaches that we use at CENSE Research + Design:

The design and innovation cycle

Much of this process has been examined in the previous posts in this series; however, it is worth looking at it again.

Herbert Simon wrote about design as a problem forming (finding), framing and solving activity (PDF). Other authors like IDEO’s Tim Brown and the Kelley brothers, have written about design further (for more references check out CENSEMaking’s library section), but essentially the three domains proposed by Simon hold up as ways to think about design at a very basic level.

What design does is make the process of stabilizing, amplifying or dampening the emergence of new information intentional. Without a sense of purpose — and a mindful attention to process — along with a sensemaking process put in place by DE, it is difficult to know what is advantageous and what is not. Within the realm of complexity we run the risk of amplifying and dampening the wrong things… or ignoring them altogether. This has immense consequences, as even staying still in a complex system is moving: change happens whether we want it or not.

The above diagram places evaluation near the end of the corkscrew process; however, that is a bit misleading. It implies that DE-related activities come at the end. What is being argued here is that if the stage isn’t set at the beginning by asking the big questions — the problem finding, forming and framing — then the efforts to ‘solve’ them are unlikely to succeed.

Without the means to understand how new information feeds into the design of the program, we end up serving data to programs that know little about what to do with it, and one of the dangers in complexity is having too much information that we cannot make sense of. In complex scenarios we want to find simplicity where we can, not add more complexity.

To do this and to foster change is to be a designer. We need to consider the program/product/service user, the purpose, the vision, the resources and the processes in place within the systems we are working in, to create and re-create the very thing we are evaluating while we are evaluating it. In that entire chain we see why developmental evaluators might also want to put on their black turtlenecks and become designers as well.

No, designers don’t all look like this.

Photo Blueprint by Will Scullen used under Creative Commons License

Design and Innovation Process model by CENSE Research + Design

Lower image used under license from iStockphoto.

complexity, design thinking, emergence, evaluation, systems thinking

Developmental Evaluation and Sensemaking

Sensing Patterns, Seeing Pathways

Developmental evaluation is only as good as the sense that can be made from the data that is received. To assume that program staff and evaluators know how to do this might be one of the reasons developmental evaluations end up as something less than they promise. 

Developmental Evaluation (DE) is becoming a popular subject in the evaluation world. As we see greater recognition of complexity as a factor in program planning and operations, and of what it means for evaluations, it is safe to assume that developmental evaluation will continue to attract interest from program staff and evaluation professionals alike.

Yet developmental evaluation is as much a mindset as it is a toolset and skillset, all of which are needed to do it well. In this third in a series of posts on developmental evaluation, we look at the concept of sensemaking and its role in understanding program data in a DE context.

The architecture of signals and sense

Sensemaking and developmental evaluation involve creating an architecture for knowledge,  framing the space for emergence and learning (boundary specification), extracting the shapes and patterns of what lies within that space, and then working to understand the meaning behind those patterns and their significance for the program under investigation. A developmental evaluation with a sensemaking component creates a plan for how to look at a program and learn from what kind of data is generated in light of what has been done and what is to be done next.

Patterns may be knowledge, behaviour, attitudes, policies, physical structures, organizational structures, networks, financial incentives or regulations. These are the kinds of activities that are likely to create or serve as attractors within a complex system.

To illustrate, architecture can be both a literal and a figurative term. In a five-year evaluation and study of scientific collaboration at the University of British Columbia’s Life Sciences Institute, my colleagues Tim Huerta, Alison Buchan and Sharon Mortimer and I explored many of these multidimensional aspects of the program* / institution and published our findings in the American Journal of Evaluation and Research Evaluation. We looked at spatial configurations by doing proximity measurements that connected where people work to whom they work with and what they generated. Research has indicated that physical proximity makes a difference to collaboration (e.g., Kraut et al., 2002). There is relatively little concrete evaluation of the role of space in collaboration, mostly just inferences from network studies (which we also conducted). Few have actually gone into the physical areas and measured distance and people’s locations.
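To make the idea concrete, here is a minimal sketch of one way a proximity measure could be computed. This is not the method from the published study; the office coordinates and collaboration ties below are hypothetical.

```python
# A minimal sketch (not the published study's method) of a proximity measure:
# given hypothetical office coordinates and hypothetical collaboration ties,
# compare average distances between collaborators and non-collaborators.
from itertools import combinations
from math import dist  # Euclidean distance (Python 3.8+)

offices = {  # hypothetical floor-plan positions, in metres
    "ana": (0, 0), "ben": (4, 1), "coe": (25, 3), "dia": (28, 9),
}
ties = {frozenset({"ana", "ben"}), frozenset({"coe", "dia"})}  # hypothetical

linked, unlinked = [], []
for a, b in combinations(offices, 2):
    d = dist(offices[a], offices[b])
    (linked if frozenset({a, b}) in ties else unlinked).append(d)

print(f"mean distance, collaborators:     {sum(linked) / len(linked):.1f} m")
print(f"mean distance, non-collaborators: {sum(unlinked) / len(unlinked):.1f} m")
```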

Why mention this? Because from a sensemaking perspective, the signals provided by the building itself had an enormous impact on the psychology of the collaborations, even if they were only a minor influence on productivity. The architecture of the networks themselves was also a key variable that went beyond simple exchanges of information; without seeing collaborations as networks, it is possible that we would never have understood why certain activities produced outcomes and others did not.

The same thing exists with cognitive architecture: it is the spatial organization of thoughts, ideas, and social constructions. Organizational charts, culture, policies, and regulations all share in the creation of the cognitive architecture of a program.

Signals and noise

The key is to determine what kinds of signals to pay attention to at the beginning. As mentioned in a previous post, design and design thinking are a good precursor and adjunct to an evaluation process (and, as I’ve argued before and will elaborate on, integral to effective developmental evaluation). Patterns could be in almost anything and made up of physical, psychological, social and ‘atmospheric’ (organizational and societal-environmental) data.

This might sound a bit esoteric, but by viewing these different domains through an eye of curiosity, we can see patterns that evaluators can measure, monitor, observe and otherwise record as the substance on which programs base decisions. This can be qualitative, quantitative, mixed-methods, archival and document-based, or some combination. Complex programs are highly context-sensitive, so the sensemaking process must include diverse stakeholders that reflect the very conditions in which the data is collected. Thus, if we are involving front-line worker data, then front-line workers need to be involved.

The manner in which this is done can be more or less participatory and involved depending on resources, constraints, values and so forth, but there needs to be some perspective taking from these diverse agents to truly know what to pay attention to and determine what is a signal and what is noise. Indeed, it is through this exchange of diverse perspectives that this can be ascertained. For example, a front line worker with a systems perspective may see a pattern in data that is unintelligible to a high-level manager if given the opportunity to look at it. That is what sensemaking can look like in the context of developmental evaluation.

“What does that even mean?” 

Sensemaking is essentially the meaning that people give to an experience. Evidence is a part of the sensemaking process, although the manner in which it is used is consistent with a realist approach to science, not a positivist one. Context is critical in the making of sense and in the decisions used to act on information gathered from the evaluation. The specific details of the sensemaking process and its key methods are beyond the depth of this post; some key sources and scholars on the topic are listed below. Like developmental evaluation itself, sensemaking is an organic process that brings an element of design, design thinking, strategy and data analytics together in one space. It brings together analysis and synthesis.

From a DE perspective, sensemaking is about understanding what signals and patterns mean within the context of the program and its goals. Even if a program’s goals are broad, there must be some sense of what the program’s purpose is and thus, strategy is a key ingredient to the process of making sense of data. If there is no clearly articulated purpose for the program or a sense of its direction then sensemaking is not going to be a fruitful exercise. Thus, it is nearly impossible to disentangle sensemaking from strategy.

Understanding the system in which the strategy and ideas are to take place — framing — is also critical. An appropriate frame for the program means setting bounds for the system, connecting that to values, goals, desires and hypotheses about outcomes, and the current program context and resources.

Practical sensemaking takes place on a time scale appropriate to the complexity of the information that sits before the participants in the process. If a sensemaking initiative is done with a complex program that has a rich history and many players involved in that history, multiple interactions and engagements with participants will likely be needed. This is in part because the sensemaking process is about surfacing assumptions, revisiting the stated objectives of the program, exploring data in light of those assumptions and goals, and then synthesizing it all to create some means of guiding future action. In some ways, this is about using hindsight and present sight to generate foresight.

Sensemaking is not just about meaning-making, but is also a key step in change-making for future activities. Sensemaking realizes one of the key aspects of complex systems: that meaning is made in the interactions between things, less so in the things themselves.

Building the plane while flying it

In some cases the sense made from data and experience can only be made in the moment. Developmental evaluation has been called “real time” evaluation by some to reflect the notion that evaluation data is made sense of as the program unfolds. To draw on a metaphor illustrated in the video below, sensemaking in developmental evaluation is somewhat like building the plane while flying it.

Like developmental evaluation as a whole, sensemaking isn’t a “one-off” event; rather, it is an ongoing process that requires attention throughout the life-cycle of the evaluation. As the evaluator and evaluation team build capacity for sensemaking, the process gets easier and less involved each time it’s done, as the program builds its connection to both its past and present context. However, such connections are tenuous without a larger focus on building mindfulness into the program — whether organization or network — to ensure that reflection and attention are paid to activities on an ongoing basis, consistent with strategy, complexity and the evaluation itself.

We will look at the role of mindfulness in an upcoming post. Stay tuned.

* The Life Sciences Institute represented a highly complicated program evaluation because it was simultaneously bounded as a physical building, a corporate institution within a larger institution, and a set of collaborative structures that were further complicated by having investigator-led initiatives combined with institutional-level ones, where individual investigators were both independent and collaborative. Taken together, this was what was considered to be a ‘program’.

References & Further Reading:

Dervin, B. (1983). An overview of sense-making research: Concepts, methods and results to date. International Communication Association Meeting, 1–13.

Klein, G., & Moon, B. (2006). Making sense of sensemaking 1: Alternative perspectives. IEEE Intelligent Systems. Retrieved from http://ieeexplore.ieee.org/xpls/abs_all.jsp?arnumber=1667957

Klein, G., Moon, B., & Hoffman, R. R. (2006). Making sense of sensemaking 2: A macrocognitive model. IEEE Intelligent Systems, 21(4),

Kolko, J. (2010). Sensemaking and framing: A theoretical reflection on perspective in design synthesis. In Proceedings of the 2010 Design Research Society (DRS) International Conference on Design & Complexity (pp. 6–11). Montreal, QC.

Kraut, R., Fussell, S., Brennan, S., & Siegel, J. (2002). Understanding effects of proximity on collaboration: Implications for technologies to support remote collaborative work. Distributed work, 137–162. Retrieved from NCI.

Mills, J. H., Thurlow, A., & Mills, A. J. (2010). Making sense of sensemaking: The critical sensemaking approach. Qualitative Research in Organizations and Management: An International Journal, 5(2), 182–195.

Rowe, A., & Hogarth, A. (2005). Use of complex adaptive systems metaphor to achieve professional and organizational change. Journal of advanced nursing, 51(4), 396–405.

Norman, C. D., Huerta, T. R., Mortimer, S., Best, A., & Buchan, A. (2011). Evaluating discovery in complex systems. American Journal of Evaluation, 32(1), 70–84.

Weick, K. E. (1995). The Nature of Sensemaking. In Sensemaking in Organizations (pp. 1–62). Sage Publications.

Weick, K. E., Sutcliffe, K. M., & Obstfeld, D. (2005). Organizing and the Process of Sensemaking. Organization Science, 16(4), 409–421.

Photo by the author of Happy Space: NORA, An interactive Study Model at the Museum of Finnish Architecture, Helsinki, FI.

complexity, design thinking, evaluation, innovation, systems thinking

Developmental Evaluation and Complexity

Stitch of Complexity

Developmental evaluation is an approach (much like design thinking) to program assessment and valuation in domains of high complexity, change, and innovation. These three terms are used often, but understood too poorly in real terms for evaluators to make much use of them. This first post in a series looks at the term complexity and what it means in the context of developmental evaluation.

Science writer and professor Neil Johnson is quoted as saying: “even among scientists, there is no unique definition of complexity – and the scientific notion has traditionally been conveyed using particular examples…”, and his definition of a science of complexity (PDF) is “the study of the phenomena which emerge from a collection of interacting objects.” The title of his book Two’s Company, Three’s Complexity hints at what complexity can mean to anyone who’s tried to make plans with more than one other person.

The Oxford English Dictionary defines complexity as:

complexity |kəmˈpleksitē|

noun (pl. complexities)

the state or quality of being intricate or complicated: an issue of great complexity.

• (usu. complexities) a factor involved in a complicated process or situation: the complexities of family life.

For social programs, complexity involves multiple overlapping sources of inputs and outputs that interact with systems in dynamic ways, at multiple time scales and organizational levels, in ways that are highly context-dependent. That’s a mouthful.

Developmental evaluation is intended to be an approach that takes complexity into account; however, that also means that evaluators and the program designers they work with need to understand some basics about complexity. To that end, here are some key concepts to start that journey.

Key complexity concepts

Complexity science is a big and complicated domain within systems thinking that brings together elements of system dynamics, organizational behaviour, network science, information theory, and computational modeling (among others).  Although complexity has many facets, there are some key concepts that are of particular relevance to program designers and evaluators, which will be introduced with discussion on what they mean for evaluation.

Non-linearity: The most central starting point for complexity is that it is about non-linearity. That means prediction and control are often not possible, perhaps harmful, or at least not useful as ideas for understanding programs operating in complex environments. Further complicating things, within the overall non-linear environment there exist linear components. It doesn’t mean that evaluators can’t use any traditional means of understanding programs; instead, it means that they need to consider what parts of the program are amenable to linear means of intervention and understanding within the complex milieu. This means surrendering the notion of ongoing improvement and embracing development as an idea. Michael Quinn Patton has written about this distinction very well in his terrific book on developmental evaluation. Development is about adaptation to produce advantageous effects for the existing conditions; improvement is about tweaking the same model to produce the same effects across conditions that are assumed to be stable.

Feedback: Complex systems are dynamic, and that dynamism is created in part from feedback. Feedback is essentially information that comes from the system’s history and present actions and that shapes immediate and longer-term future actions. An action leads to an effect, which is sensed and made sense of, which leads to possible adjustments that shape future actions. For evaluators, we need to know what feedback mechanisms are in place, how they might operate, and what (if any) sensemaking rubrics, methods and processes are used with this feedback to understand what role it has in shaping decisions and actions about a program. This is important because it helps track the non-linear connections between causes and effects, allowing the evaluator to understand what might emerge from particular activities.

Emergence: What comes from feedback in a complex system are new patterns of behaviour and activity. Due to the ongoing, changing intensity, quantity and quality of information generated by the system variables, the feedback may look different each time an evaluator looks at it. What comes from this differential feedback can be new patterns of behaviour that are dependent on the variability in the information and this is called emergence. Evaluation designs need to be in place that enable the evaluator to see emergent patterns form, which means setting up data systems that have the appropriate sensitivity. This means knowing the programs, the environments they are operating in, and doing advanced ‘ground-work’ preparing for the evaluation by consulting program stakeholders, the literature and doing preliminary observational research. It requires evaluators to know — or at least have some idea — of what the differences are that make a difference. That means knowing first what patterns exist, detecting what changes in those patterns, and understanding if those changes are meaningful.

Adaptation: With these new patterns and sensemaking processes in place, programs will consciously or unconsciously adapt to the changes created through the system. If a program is operating in an environment where complexity is part of the social, demographic, or economic context, even a stable, consistently run program will require adaptation simply to stay in the same place, because the environment is moving. This means sufficiently detailed record-keeping is needed — whether through program documents, reflective practice notes, meeting minutes, observations, etc. — to monitor what current practice is, link it with the decisions made using the feedback, emergent conditions and sensemaking from the previous stages, and then track what happens next.

Attractors: Not all of the things that emerge are useful, and not all feedback is supportive of advancing a program’s goals. Attractors are patterns of activity that generate emergent behaviours and ‘attract’ resources — attention, time, funding — in a program. Developmental evaluators and their program clients seek to find attractors that are beneficial to the organization and amplify those to ensure sustained or possibly greater benefit. Negative (unhelpful) attractors do the opposite, and knowing when those form enables program staff to dampen their effect by adapting and shifting activities.

Self-organization and co-evolution: Tied to all of this are the concepts of self-organization and co-evolution. The previous concepts all come together to create systems that self-organize around these attractors. Complex systems do not allow us to control and predict behaviour, but we can direct actions, shape the system to some degree, and anticipate possible outcomes. Co-evolution is a bit of a misnomer in that it refers to the principle that organisms (and organizations) operating in complex environments are mutually affected by each other. This mutual influence might be different for each interaction, affecting each organization/organism differently as well, but it points to the notion that we do not exist in a vacuum. For evaluators, this means paying attention to the system(s) that the organization is operating in. Whereas with normative, positivist science we aim to reduce ‘noise’ and control for variation, in complex systems we can’t do this. Network research, system mapping tools like causal loop diagrams and system dynamics models, gigamapping, or simple environmental scans can all contribute to the evaluation, enabling the developmental evaluator to know what forces might be influencing the program.
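As a toy illustration of several of these concepts at once — an analogy invented for this post, not a model of any real program — the logistic map shows how feedback strength alone can move a system between a stable attractor, an oscillating one, and unpredictable behaviour:

```python
# A toy illustration (analogy only) of feedback and attractors using the
# logistic map: x_next = r * x * (1 - x), where x stands in for any
# self-reinforcing program measure and r is the feedback strength.

def settle(r, x0=0.5, steps=200, keep=6):
    """Iterate the map, then return the last few values, which reveal
    the attractor (if any) that the system settles into."""
    x = x0
    for _ in range(steps):
        x = r * x * (1 - x)  # feedback: next state depends on current state
    tail = []
    for _ in range(keep):
        x = r * x * (1 - x)
        tail.append(x)
    return tail

for r in (2.8, 3.2, 3.9):
    print(f"r = {r}:", ", ".join(f"{v:.3f}" for v in settle(r)))

# r = 2.8 settles on a single value (a stable attractor); r = 3.2 oscillates
# between two values (a periodic attractor); r = 3.9 never settles (chaos),
# where tiny changes in the starting point yield very different trajectories,
# the hallmark of non-linearity.
```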

Ways of thinking about complexity

One of the most notable challenges for developmental evaluators, and for those seeking to employ developmental evaluation, is thinking systemically about complexity. It means accepting non-linearity as a key principle in viewing a program and its context. It also means that context must be accounted for in the evaluation design. Simplistic assertions about methodological approaches (“I’m a qualitative evaluator / I’m a quantitative evaluator“) will not work. Complex programs require attention to macro-level contexts and moment-by-moment activities simultaneously, and at the very least demand mixed-method approaches to their understanding.

Although much of the science of complexity is based on highly mathematical, quantitative science, its practice as a means of understanding programs is quantitative, qualitative and synthetic. It requires attention to the context and nuances that qualitative methods can reveal and to the macro-level understanding that quantitative data can produce from many interactions.

It also means getting away from language about program improvement towards language of development, and that might be the hardest part of the entire process. Development requires adaptation of the program, thinking and rethinking about the program’s resources and processes, and integrating feedback into an ongoing set of adjustments that perpetuate through the life cycle of the program. This requires a different kind of attention, methods, and commitment from both a program and its evaluators.

In the coming posts I’ll look at how this attention gets realized in designing and redesigning the program as we move into developmental design.