Category: evaluation

complexity, design thinking, evaluation, systems science, systems thinking

The Complexity of Planning and Design in Social Innovation

The Architecture of Complex Plans

Planning works well for linear systems but often runs into difficulty when we encounter complexity. How do we make use of plans without putting too much faith in their anticipated outcomes while still designing for change? And can developmental design and developmental evaluation be a solution?

It’s that time of year when most people are starting to feel the first pushback against their New Year’s resolutions. That strict budget, the workout plan, the make-time-for-old-friends commitments are most likely encountering their first test. Part of the reason is that most of us plan for linear activities, yet in reality most of these activities are complex and non-linear.

A couple of interesting quotes about planning for complex environments:

No battle plan survives contact with the enemy – Colin Powell (paraphrasing Helmuth von Moltke the Elder)

In preparing for battle I have always found that plans are useless, but planning is indispensable – Dwight D. Eisenhower

Combat might be the quintessential complex system, and both Generals Powell and Eisenhower knew how to plan for it and what limits planning had, yet that didn’t dissuade them from planning, acting and reacting. In war, the end result is what matters, not whether the plan for battle went as outlined (although the costs and actions taken are not without scrutiny or concern). In human services, there is a disproportionate amount of concern about ‘getting it right’ and holding ourselves to account for how we got to our destination, relative to what happens at the destination itself.

Planning presents myriad challenges for those dealing with complex environments. Most of us, when we plan, expect things to go according to what we’ve set up. We develop programs to fit with this plan, set up evaluation models to assess the impact of this plan, and envisage entire strategies to support the delivery and full realization of this plan into action. For those working in social innovation, what is often realized falls short of what was outlined, which inevitably causes problems with funders and sponsors who expect a certain outcome.

Part of the problem is the mindset that shapes the planning process in the first place. Planning is designed largely around the cognitive rational approach to decision making (PDF), which is based on reductionist science and philosophy. A plan is often seen as a blueprint laying out how a program or service is to unfold over time. Such a model of outlining a strategy is quite suitable for building a physical structure like an office, where everything from the materials to the machines used to put them together can be counted, measured and bound. It is much less relevant for services that involve interactions between autonomous agents whose actions influence the outcome of that service, with results that may vary from context to context as a consequence.

For evaluators, this is problematic because it reduces control (and increases variance and ‘noise’) in models that are designed to reveal specific outcomes using particular tools. For program implementers, it is troublesome because rigid planning can drive actions away from where people are and toward activities that might not be contextually appropriate due to some change in the system.

For this reason, the twin concepts of developmental evaluation and developmental design require some attention. Developmental evaluation is a complexity-oriented approach to feedback generation and strategic learning intended for programs with a high degree of novelty and innovation. Programs where the evidence base is thin or non-existent, the context is shifting, and there are numerous strong and diverse influences are those where developmental evaluations are not only appropriate, but perhaps one of the only viable models of data collection and monitoring available.

Developmental design is a concept I’ve been working on as a reference to the need to incorporate ongoing design and re-design into programs even after they have launched. A program thus evolves over time, drawing on feedback gained through processes like evaluation to adjust its components to meet changing circumstances and needs. Rather than remaining static, a developmentally designed program systematically incorporates design thinking into the evolutionary fabric of its activities and decision making.

Developmental design and developmental evaluation work together to provide the data program planners need to constantly adapt their offerings to changing conditions, thus avoiding the problem of outcomes becoming decoupled from program activities and working with complexity rather than against it. For example, developmental evaluation can determine which key attractors are shaping program activities, while developmental design can work with those attractors to amplify or dampen them depending on the level of beneficial coherence they offer a program. Joined, the two processes let us acknowledge complexity while creating more realistic and responsive plans.

Such approaches to design and evaluation are not without contention among traditional practitioners, leaving questions about the integrity of the finished product (for design) and the robustness of the evaluation methods. But without alternative models that take complexity into account, we are simply left with bad planning instead of making it what Eisenhower wanted it to be: indispensable.

evaluation, knowledge translation

A Call to Evaluation Bloggers: Building A Better KT System

Time To Get Online...

Are you an evaluator and do you blog? If so, the American Evaluation Association wants to hear from you. This CENSEMaking post features an appeal to those who evaluate, blog and want to share their tips and tricks for helping create a better, stronger KT system. 

Build a better mousetrap and the world will beat a path to your door — Attributed to Ralph Waldo Emerson

Knowledge translation in 2011 is a lot different from what it was before we had social media, the Internet and direct-to-consumer publishing tools. We now have the opportunity to communicate directly with an audience and share our insights in ways that go beyond technical reports and peer-reviewed publications, coming closer to sharing our tacit knowledge. Blogs have become a powerful medium for doing this.

I’ve been blogging for a couple of years and quite enjoy it. As an evaluator, designer, researcher and health promoter, I find it allows me to take different ideas and explore them in ways that more established media do not. I don’t need the idea to be perfect, or fully formed, or relevant to a narrow audience. I don’t need to worry about what my peers or my editor think, because I serve as peer reviewer, editor and publisher all at the same time.

I originally started blogging to share ideas with students and colleagues — just small things about the strange blend of topics I engage in that many don’t know about, don’t understand, or want to know more about. Concepts like complexity, design thinking, developmental evaluation and health promotion can get kind of fuzzy or opaque for those outside those various fields.

Blogs enable us to reach an audience directly and provide a means of adaptive feedback on ideas that are novel. Using the comments, visit statistics and direct messages sent to me by readers, I can gain some sense of which ideas are being taken up and which ones resonate. That enables me to tailor my messages and amplify the parts that are of greater utility to a reader, thus increasing the likelihood that a message will be taken up. For CENSEMaking, the purpose is more self-motivated writing than assessing the “best” messages for the audience; however, I have a series of other blogs that I use as KT tools for projects. These are, in many cases, secured and by invitation only to the project team and stakeholders, but still look and feel like any normal blog.

WordPress (this site) and Posterous are my favorite blogging platforms.

As a KT tool, blogs are becoming more widely used. Sites like Research Blogging are large aggregations of blogs on research topics. Others, like this one, are designed for certain audiences and topics — even KT itself, like the KTExchange from the Research Into Action initiative at the University of Texas and MobilizeThis! from the Research Impact knowledge mobilization group at York University.

The American Evaluation Association has an interesting blog initiative led by AEA’s Executive Director Susan Kistler called AEA365, a tip-a-day blog for evaluators looking to learn more about who and what is happening in their field. A couple of years ago I contributed a post on using information technology in evaluation and was delighted at the response it received. So it reaches people. It’s for this reason that AEA is calling on evaluation bloggers to contribute to the AEA365 blog with recommendations and examples of how blogging can be used for communications and KT. AEA365 aims to create small-bite pockets of information that are easily digestible by its audience.

If you are interested in contributing, the template for the blog is below, with my upcoming contribution to the AEA365 blog posted below that.

By embracing social media and the power to share ideas directly (and doing so responsibly), we have a chance to come closer to realizing the KT dream: putting more effective, useful knowledge into the hands of those who can use it, faster, and engaging those who are most interested and able to use that information more efficiently and humanely.

Interested in submitting a post to the AEA365 blog? Contact the AEA365 curators at aea365@eval.org.

Template for aea365 Blogger Posts (see below for an example)

[Introduce yourself by name, where you work, and the name of your blog]

Rad Resource – [your blog name here]: [describe your blog, explain its focus including the extent to which it is related to evaluation, and tell about how often new content is posted]

Hot Tips – favorite posts: [identify 3-5 posts that you believe highlight your blogging, giving a direct link and a bit of detail for each (see example)]

  • [post 1]
  • [post 2]
  • Etc.

Lessons Learned – why I blog: [explain why you blog – what you find useful about it and the purpose for your blog and blogging. In particular, are you trying to inform stakeholders or clients? Get new clients? Provide a public service? Help students?]

Lessons Learned: [share at least one thing you have learned about blogging since you started]

Remember – stay under 450 words total please!

My potential contribution (with a title I just made up): Cameron Norman on Making Sense of Complexity, Design, Systems and Evaluation: CENSEMaking

Rad Resource – [CENSEMaking]: CENSEMaking is a play on the name of my research and design studio consultancy and on the concept of sensemaking, something evaluators help with all the time. CENSEMaking focuses on the interplay of systems and design thinking, health promotion and evaluation, and weaves together ideas I find in current social issues, reflections on my practice, and the evidence used to inform it. I aspire to post on CENSEMaking 2-3 times per week, although because posts take a short-essay format, finding the time can be a challenge.

Hot Tips – favorite posts:

  • What is Developmental Evaluation? This post came from a meeting of a working group with Michael Quinn Patton and was fun to write because the original exercise that led to the content (described in the post) was so fun to do. It also provided an answer to a question I get asked all the time.
  • Visualizing Evaluation and Feedback. I believe that the better we can visualize complexity and the feedback we provide, the greater the opportunities we have for engaging others, and the more evaluations will be utilized. This post was designed to provoke thinking about visualization and illustrate how it’s been creatively used to present complex data in interesting and accessible ways. My colleague and CENSE partner Andrea Yip has tried to do this with a visually oriented blog on health-promoting design, which provides some other creative examples of ways to make ideas more appealing and data feel simpler.
  • Developmental Design and Human Services. Creating this post has sparked an entire line of inquiry for me on bridging DE and design that has since become a major focus for my work. This post became the first step in a larger journey.

Lessons Learned – why I blog: CENSEMaking originally served as an informal means of sharing my practice reflections with students and colleagues, but has since grown to serve as a tool for knowledge translation to a broader professional and lay audience. I aim to bridge the sometimes foggy world that things like evaluation — particularly developmental evaluation — inhabit and the lived world of the people whom evaluation serves.

Lessons Learned: Blogging is a fun way to explore your own thinking about evaluation and make friends along the way. I never expected to meet so many interesting people who reached out after reading a blog post of mine or linked to something I wrote. This has also led me to learn about so many other great bloggers, too. Give a little, get a lot in return, and don’t try to make it perfect. Make it fun and authentic and that will do.

___

** Photo by digitalrob70 used under Creative Commons License from Flickr

complexity, evaluation, innovation, systems thinking

What is Developmental Evaluation?

Developmental evaluation (DE) is a problematic concept because it deals with a complex set of conditions and potential outcomes that differ from and challenge the orthodoxy of much mainstream research and evaluation, which makes it difficult to communicate. At a recent gathering of DE practitioners in Toronto, we were charged with coming up with an elevator pitch to describe DE to someone who wasn’t familiar with it; this is what I came up with.

Developmental evaluation is an approach to understanding the activities of a program operating in dynamic, novel environments with complex interactions. It focuses on innovation and strategic learning rather than standard outcomes, and it is as much a way of thinking about programs-in-context and the feedback they produce as it is a method. The concept extends Michael Quinn Patton’s original concept of Utilization-Focused Evaluation with ideas gleaned from complexity science to account for the dynamism and novelty. While Utilization-Focused Evaluation has a series of steps to follow (PDF), Developmental Evaluation is less prescriptive, which is both its strength and the challenge in describing it to people (things I’ve discussed in earlier posts).

So with that in mind, our group was charged with coming up with a way to explain DE to someone who is not familiar with it, using anything we’d like — song, poetry, dance, slides, stories and beyond. While my colleague Dan chose to lead us all in song, I opted (lacking good vocal skills) to go with a simple analogy, comparing DE to a hybrid of Trip Advisor and the classic Road Trip.

Trip Advisor has emerged as one of the most popular tools for travellers seeking advice on everything from hotel rooms to airlines to resorts and all the destinations along the way. Trip Advisor averages more than 13 million unique visitors per month and, unlike its competitors, relies on user-generated content to support its service. Thus, your fellow travellers are the source of the recommendations, not some professional travel agent or journalist. At its heart are stories of varying tone, detail and quality. People upload accounts of their stays, chronicling even the most minute detail through photos, links to their blogs, video and narrative. If you want the inside details on what a hotel is really like, check Trip Advisor and you’ll likely find them.

However, like any self-organizing set of ideas, the quality of the content will vary along with the level of reportage and the conclusions will be different depending on the context and experience of the person doing the reporting. For example, if you are a North American who is used to having even the most basic hotel chain offer a room with full-service linens, a bathroom, closet, desk and separate shower, you’ll have a hard time adjusting to something like EasyHotel in Europe.

The Road Trip part (capitalized to denote something different from a regular trip by road) refers to the experience that comes from a journey with a desired destination but no pre-determined route and only a generalized timeline. A Road Trip is more than just travelling from Point A to Point B, which is usually accomplished by taking the shortest route, the fastest route or a combination of the two; rather, it is a journey. Movies like National Lampoon’s Vacation (and European Vacation), Thelma and Louise, Harold & Kumar Go to White Castle, and (surprise!) Road Trip all capture this spirit to some effect. I suppose one might even find a grimmer example of a Road Trip in the Lord of the Rings trilogy or The Road.

Road Trips have a long history and are not just a North American phenomenon, as this article from the Indian newspaper The Hindu reports in some detail:

“Road trips are fun when they are not planned point-to-point. As long as you have accommodation booked, that is enough. Its better not to have agendas; get as spontaneous and adventurous as you can. My friends and I went on a road trip to Goa last year. It was loads of fun as it was the first time we took off on our own without parents. To me, it was more than just a trip with friends. It showed that I could take care of myself and that I was now a grown-up, free to do what I wanted,” says Siddharth, who is doing his engineering.

Spontaneity and adventure are part of the process, not an unexpected problem to be solved as in a traditional evaluation. Indeed, some of these unplanned and unusual departures are not only part of the learning but essential to it. It is akin to what Thor Muller describes as planned serendipity; you might not know what is coming, but it is possible to set up the conditions to increase the likelihood of, and preparedness for, moments of discovery and learning. This is like setting out on a journey with a mindset of developmental and strategic learning, in keeping with what Louis Pasteur said about discovery:
Chance favours the prepared mind

Thus, as Developmental Evaluators and program implementation leaders, we create the conditions to learn en route to a general destination, without a clear path and with an open mind toward what might unfold. This attention to the emergence of new patterns, and the sensemaking required to understand what those patterns mean in the context in which they emerged and against the goals, directions and resources that surround the discovery, is an important facet of what separates Developmental Evaluation from other forms of evaluation and research.

So in describing DE to others, I proposed combining these two ideas of Trip Advisor and the Road Trip to create: Road Trip Advisor.

Road Trip Advisor for Developmental Evaluation

Road Trip Advisor would involve going on a journey that has a general destination, but with no single path to it. Along the way, the Developmental Evaluator would work with those taking the journey with them — likely the program staff, stakeholders and others interested in strategic learning and feedback — and systematically capture the decision points at which a particular path was taken, the process that unfolded in making those decisions, and the outcomes or events connected to those decisions (inasmuch as one can draw such linkages), then continually dialogue with the program team about what they are seeing, sensing and experiencing. This includes what innovations are being produced.

Returning to the article on road tripping from The Hindu:

“Road-tripping is a great way to bond with the people you are travelling with and I would strongly recommend it to people. It not only makes you appreciate yourself as an individual but is an amazing experience as you get to meet new people, know different cultures and sample different cuisines. I can never forget biking on sleet, riding through torrential rains, gobbling hot rotis at dhabas, the beautiful snow-capped mountains and guy talk with friends on the trip,” says Dheeraj, who recently went to Ladakh.

Here the focus is on relationships, learning new things and taking that learning onward. That is what DE is all about. My colleague Remi illustrated this in our meeting by having us all spread throughout the room for a pantomime-type skit in which he collected information from each participant about where the wisdom was, then brought that person along for the journey. Having started out alone as the Developmental Evaluator, he arrived at the destination of wisdom with everyone.

Road Trip Advisor requires documenting the journey along the way, sharing what you learn with others, continuing to learn and revisiting your notes — while checking out what notes others have (including evidence from other projects and academic research) — and integrating it all on an ongoing basis.

But as my other colleagues pointed out in their presentations, the journey isn’t always about feeling good. Sometimes there are challenges, as the Hindu article adds:

All is not hunky dory during these trips. You have to be wary about accidents and mishaps. And, realise that freedom comes with responsibility. Says Arjun: “I had borrowed my friend’s bike for the trip, and though it looked good, it gave problems on the foothills of Kodaikanal and we couldn’t do the climb. Being a weekend, there were no mechanics. It helps to know your machine. A passion for road-tripping is not enough. You need to be equipped to take care of yourself also.”

Here, the story parallel is about being prepared. Know evaluation methods, know how to build and sustain relationships, and know how to deal with conflict. A high tolerance for ambiguity and the flexibility to adapt are also important. Knowing a little about systems thinking and complexity doesn’t hurt either. Developmental evaluation is not healthy for those who need a high degree of predictability, are inflexible in their approach, or adhere to rigid timelines. Complex systems collapse under rigid boundary conditions, and so do evaluators working under such restrictions in developmental contexts.

So why do people do it? “Well, my memories of my favourite road trip were an injured leg, chocolates, beautiful photographs and a great sense of fulfilment,” recalls Arjun.

It is youngsters like these who have transformed road-tripping from just a hobby to an art.

After all, friendship and travel is a potent combination that you can’t say no to.

In DE, the “youngsters” are everyone. But as we (my DE colleagues) all pointed out: DE is fun. It is fun because we learn and grow and challenge ourselves and the programs we are working with. It’s collaborative, instructive, and promotes a level of connection between people, programs and ideas that other methods of evaluation and learning are less effective at fostering. DE is not for everyone or every program, and Michael Quinn Patton has pointed this out repeatedly. But for those programs where innovation, strategic learning and collaboration count, it is a pretty good way to journey from where you are to where you want to go.
art & design, evaluation

Visualizing Evaluation and Feedback

Seeing Ourselves in the Mirror of Feedback

Evaluation data is not always made accessible, and part of the reason is that it doesn’t accurately reflect the world that people see. To be more effective at making decisions based on data, creating the mirrors that allow us to visualize things in ways that reflect what programs see may be key.

Program evaluation is all about feedback and generating the kind of data that can provide actionable instruction to improve, sustain or jettison program activities. It helps determine whether a program is doing what it claims to be doing, what kind of processes are underway within the context of the program, and what is generally “going on” when people engage with a particular activity. Whether a program actually chooses to use the data is another matter, but at least it is there for people to consider.

A utilization-focused approach to evaluation centres on making data actionable and features a set of core activities (PDF) that help boost the likelihood that data will actually be used. Checklists such as the one referenced from IDRC do a great job of showing the complicated array of activities that go into making useful, user-centred, actionable evaluation plans and data. It isn’t as simple as expressing an intent to use evaluations; much more needs to go into the data in the first place, and into the organization’s readiness to use that data.

What the method of UFE and the related research on its application do not do is provide explicit, prescriptive methods for data collection and presentation. If they did, data visualization ought to be considered front and centre in the discussion.

Why?

If the data is complex, our ability to process the information generated from an evaluation might be limited when we are expected to connect disparate concepts. David McCandless has made a career of taking very large, complex topics and finding ways to visualize results that provide meaningful narratives people can engage with. His TED talk and books provide examples of how to use graphic design and data analytics to develop new visual stories through data that transcend the typical regression model or pie chart.

There is also a bias we have towards telling people things rather than allowing them to discover things for themselves. Robert Butler makes the case for the “Columbo” approach of inviting people to discover the truth in data in the latest issue of the Economist’s Intelligent Life. He writes:

What we need to do is abandon the “information deficit” model. That’s the one that goes: I know something, you don’t know it, once you know what I know you will grasp the seriousness of the situation and change your behaviour accordingly. Greens should dump that model in favour of suggesting details that actually catch people’s interest and allow the other person to get involved.

Art — or at least visual data — is a means of doing this. By inviting conversation about data — much as art does — we invite participation, analysis and engagement with the material, which not only makes it more meaningful but also more likely to be used. It is hard to look at some of the visualizations at places like…

At the very least, evaluators might want to consider ways to visualize data simply to improve the efficiency of their communications. To that end, consider Hans Rosling’s remarkably popular video produced by the BBC showing the income and health distributions of 200 countries over 200 years in four minutes. Try that with a series of graphs.
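To make the comparison concrete, here is a minimal sketch of a single frame of a Rosling-style bubble chart in Python with matplotlib. Every country name and number below is invented for illustration; a real version would draw on a dataset like Gapminder’s, which underpins Rosling’s video.

```python
import matplotlib.pyplot as plt

# Hypothetical country data for a single year: name -> (income per person
# in USD, life expectancy in years, population in millions). All invented.
countries = {
    "Aland": (2_000, 55, 160),
    "Borland": (6_500, 63, 90),
    "Cresta": (12_000, 70, 45),
    "Dorsia": (35_000, 80, 310),
}

incomes = [v[0] for v in countries.values()]
life_expectancy = [v[1] for v in countries.values()]
sizes = [v[2] * 5 for v in countries.values()]  # scale population to marker area

fig, ax = plt.subplots()
ax.scatter(incomes, life_expectancy, s=sizes, alpha=0.5)
for name, (x, y, _) in countries.items():
    ax.annotate(name, (x, y))  # label each bubble with its country name

ax.set_xscale("log")  # income gaps read more clearly on a log axis
ax.set_xlabel("Income per person (USD, log scale)")
ax.set_ylabel("Life expectancy (years)")
ax.set_title("One frame of a Rosling-style bubble chart (invented data)")
plt.show()
```

Rosling’s four-minute video is essentially a sequence of such frames, one per year. The design choice worth noting is that encoding a third variable (population) in bubble size lets a single chart carry what would otherwise take a stack of separate graphs.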


evaluation, innovation, research

Developmental Thinking and Evaluation

Think developmentally before evaluating developmentally

Developmental evaluation is difficult to initiate, largely because the thinking behind it is so foreign to normal program planning and reporting. It appears that developmental thinking needs to be in place before one can hope to implement a DE project successfully.

Over the next few days I will be meeting with colleagues working with the Social Innovation Generation group, Michael Quinn Patton and others who share an interest in, and wrestle with, developmental evaluation (DE) in practice.

Over the course of the last year we have been meeting monthly to discuss our experiences, challenges and learning on the issue of developmental evaluation. Although our group members come from diverse fields — government, academia, non-profit and others — and are focused on projects that range in scope, we all share one common experience: frustration with implementing DE.

Reading through a case study the other night, I couldn’t help but see something I’d seen before: the principal barrier to the implementation of DE is that the program, its partners, or the stakeholders associated with the program didn’t individually or collectively function in a manner that supported DE. Whether they actually bought into DE in the first place is also not known, but it seems to me that the two are related.

Developmental thinking about social issues has shown itself in my work to be a linchpin for any progress on developmental evaluation. Commiserating with colleagues in this area, it seems evident to me that assessing readiness for DE is a critical step in the pre-work that needs to come before any evaluation takes place. Without developmental thinking, developmental action and evaluation are hard to reasonably achieve.

If you do not see your program as one that evolves, but rather as one that just gets bigger, better, stronger, weaker, etc., then having real-time evaluation tools will be less useful, or perhaps even harmful, in the absence of a thinking framework to make sense of the data. Real-time, consultative evaluation and its utilization-focused actions make DE stand apart from other approaches to evaluation, even if the methods and tools are similar.

The implications of this assertion for practice are enormous. It means that a DE practitioner cannot be just an evaluator, or at least must find others who can work with a program to educate, inspire and contemplate collaboratively about developmental thinking and what it means for that program. It also brings the evaluation function far closer to program planning than evaluators (and program planners) might be used to. And it means holding a willingness to think differently, not just implement different thinking. To that end, knowledge of motivation and some sense of how one provokes or creates space for change is also important.

Taken together, we have ourselves a real challenge. The “core competencies” for DE already include qualities like people skills, knowledge of complexity, and communication skills (in addition to fundamental skills in evaluation methods and process implementation), but now we are adding additional ones. Systems thinking, behaviour change, program planning and design are all reasonable skills that would assist an evaluator in doing this work. Nice in theory, but how about in practice? Can we reasonably expect that there are enough people out there with these skills to do it well? Or is this a call for more of a team-science (or rather, team evaluation) approach to evaluation?


complexity, design thinking, evaluation, innovation, systems thinking

The Momentum Problem in Developmental Design and Evaluation

Setting a pace and keeping speed

Developmental projects evolve at a pace that suits them, but what happens when the speed and pattern of this process collide with the other projects in life? 

The concepts of developmental evaluation and developmental design resonate with a lot of people working in social innovation, public health and international programming. The reason is that, despite the wealth of planning frameworks available and the logic embedded within them, the world doesn’t really work according to plan.

As Colin Powell once said (paraphrasing another famous military leader):

No battle plan survives contact with the enemy

While we may accept this as common and expected among our programs, it doesn’t make adapting to these circumstances any easier. And, true to complexity, the more elements added to this mix, the more unpredictable and non-linear things get.

For developmental evaluators, this non-linearity and complexity is part of the job, but when you’re working on multiple projects, that job becomes more challenging to do. When you are a program manager responsible for budgets and for ensuring you have the right staff, accounting for the delays and system dynamics associated with your program delivery is an enormous undertaking and can by itself shape the program that is actually delivered. One can’t justify keeping a staff member on to wait for things to happen; most of the time that person is given other things to do in the interim. However, those “other things” lead to fragmented attention to what is going on with the program.

Multiply this manyfold and you have a truly complex problem affecting a complex program.

What does this mean for developmental design and evaluation? My motivation for writing this post is to solicit ideas and stories about this problem set and explore some potential solutions. While I personally struggle to maintain focus and momentum on projects that have extended lags and unpredictable or spontaneous patterns of activity, I know that many of those lags are partly caused by those running the programs having other things on their plate. It’s a compounding problem: one person experiences a delay and fills the time with other things that take them away from the project, which creates further delays with other elements of the program, and so on.

From a design standpoint, this is less problematic. These delays can spur creative reflection and action towards generating a product if the time away from action is used for such mindful attending to ideas.

For developmental evaluation, this is slightly more problematic, as the event-process-effect links that we seek to connect become harder to disentangle. Non-linearity doesn’t mean that there is no such thing as cause and effect. It is just that there are consequences arising from events that are nearly impossible to trace back to a single “cause” (which may not exist); nonetheless, something does happen that sparks other things. The more one can attend to such things, the better the quality of the evaluation.

Yet I argue that the very complexity of these programs requires more, not less, attention when doing evaluation, lest we become simple storytellers. We offer more than that. But to do that well requires a sustained level of attention to the dynamics and what we might call paying attention to the silences: gleaning lessons from non-action that might have significant impact on our programs. This also requires not “filling the time” when things are quiet, but remaining active. Anyone who practices mindfulness meditation knows that non-doing requires a lot of work!

This sounds nice, but how practical is it? And how do we set benchmarks of a sort to evaluate the silences and justify such active work in times of quiet? Or do we simply ride momentum like others do and hope that we can pick things up when the momentum is high?

Photo Speed of Sound by Ana Patricia Almeida used under Creative Commons License from Flickr

behaviour change, complexity, design thinking, evaluation, innovation

Developmental Design and Human Services

The ever-changing landscape

One of the principal challenges for program evaluators and researchers is overcoming design limitations imposed by programs that fail to account for time and development. What might it look like if we took this path and what does it mean to engage in developmental design?

If you are like me, climate change scares you. I live in Canada, so in some ways (particularly on these cold winter days) the thought of having the season be a little warmer has some appeal. But complex systems don’t work quite that way: intense costs are projected to come with the privilege of having more days of the year wearing a light jacket or shorts rather than an overcoat or parka. What makes climate change an interesting example from the perspective of service programming, design and evaluation is that it provides a real-world way to conceive of development, as distinct from concepts like improvement, and takes change in a whole new direction.

Humans are rather paradoxical creatures in that we are both attuned to moving forward (consider the design of the body: everything is oriented to one direction) and a perfect example of a developmental system.

A developmental system is one that evolves and adapts to changing inputs and transforms itself over time as more information is added to it (i.e., it is a complex adaptive system). From a programming standpoint, it means things don’t “get better” or “improve” per se — those are value judgements placed by us — but rather, they build adaptive capacity.

Concepts like developmental evaluation, introduced and discussed in this space before, are ways to respond to this from an evaluation standpoint. DE provides a method of feedback generation that can enable programs to adapt and evolve by using the principles of complexity science with program evaluation methods to create a platform to detect and monitor emergent conditions and support innovation. And while there are some questions to ask of a program to see if it is suited to a developmental evaluation, we often forget to ask whether the program was designed to develop in the first place. What if we placed that at the centre of our discussion and started with development in mind?

My previous post looked at designing for time and space, but designing for development takes this one step further. Social media and technology-delivered program spaces provide an example of an environment where development is most obvious. Facebook was designed to expand and evolve, although one might challenge how well it has really developed. If you consider how effective, long-lasting software and services survive, they develop over time. In some cases, this development was designed into the process. Many open-source software platforms are designed with this in mind — the Firefox browser and even Google’s Chrome are examples of tools that were built to be developed on. The originators designed the basic foundation with the idea that it would evolve into something else.

This doesn’t happen very often with human services. Few programs are designed with development in mind, and when it is acknowledged that things will change, the acknowledgement comes reluctantly. Program in this context refers to any organized effort to change behaviour and produce products for human need. In public health, the further irony is that programs aimed at changing behaviour — whether supporting healthy eating, smoking cessation, mental health promotion or others — are often designed with rigid controls built in. We develop manuals, create ‘best practice guidelines’, amass evidence and create toolkits meant to apply to any circumstance, without attention to context or adaptation.

Indeed, when you relax these controls, many get concerned.

Having conducted a few social media trainings and presentations over the years, the most consistent question I am asked by those in public health is: how do I control the message? The answer is: you don’t. This can lead to questions about evaluation, which gets into problems of research design and trusting the findings, because research typically applies rigid controls for quality assurance.

With social media, what can be done is to use a process of developmental design by engaging with the audience/client/public in an authentic manner with the explicit thought that the program that launches today will not be the one that people engage with in a year, or a month or sooner. Support this evolution through developmental evaluation (which I would include as a part of the developmental design process) and you’ll have a feedback mechanism that encourages shifts over time.

Developmental design takes into account the complexity of the environment in which a product or service operates and enlists a continued process of engagement with stakeholders over time — a true relationship (which is why social media can serve as a good example). Rather than taking a static design brief, a living design brief would be used, constantly revisited and tweaked over time. Paying attention to changes in the brief would also enable program developers to detect weak signals that could precede large shifts in behaviour and potentially support strategic foresight and planning. Developmental design, as I’ve conceived of it here, is attuned to complexity and innovation in human systems, designs for it and adapts with it, rather than assuming the opposite.
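As a rough illustration of what detecting weak signals might look like in practice, here is a small, hypothetical Python sketch that flags weeks where program engagement drifts well outside its recent baseline. The data, window size and threshold are all assumptions made for the example, not a prescribed method.

```python
from statistics import mean, stdev

# Invented weekly engagement counts for a hypothetical program.
weekly_engagement = [52, 48, 50, 55, 47, 51, 49, 63, 71, 84]

WINDOW = 5       # weeks of history that form the rolling baseline
THRESHOLD = 2.0  # deviation (in standard deviations) that counts as a signal

for week in range(WINDOW, len(weekly_engagement)):
    baseline = weekly_engagement[week - WINDOW:week]
    mu, sigma = mean(baseline), stdev(baseline)
    value = weekly_engagement[week]
    # Flag weeks that drift well outside the recent baseline; these are
    # prompts to revisit the living design brief, not verdicts.
    if sigma > 0 and abs(value - mu) > THRESHOLD * sigma:
        print(f"Week {week}: {value} vs. baseline {mu:.1f} (sd {sigma:.1f}):"
              f" possible weak signal worth a conversation")
```

In real use a flag like this would simply open a conversation with the program team; a statistical blip by itself says nothing about why engagement shifted, which is where the developmental evaluation comes in.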

Applying developmental design may get us past the inevitable square-peg-round-hole problem that many evaluators, program planners and policy makers find themselves in as they seek greater value from their programs and demand more return on their investments. Evaluation and research are sought as the means to do it, and with programs designed for evolution from the start, perhaps we won’t be surprised when the metaphorical ice sheets start to fall apart; we’ll see it as a developmental step to a new reality.

** Photo Nature Antarctica 17 by Christian Revival Network used under Creative Commons Licence.