Category: evaluation

complexity, evaluation, research, systems thinking

Systems Thinking, Logic Models and Evaluation

San Antonio at Night, by Corey Leopold (CC License)

The American Evaluation Association conference is on right now in San Antonio, and with hundreds of sessions spread over four days it is hard to focus on just one thing. For those interested in systems approaches to evaluation, the conference has offered a wealth of learning opportunities.

The highlight was a session on systems approaches to understanding one of evaluation’s staples: the program logic model.

The speakers, Patricia Rogers from the Royal Melbourne Institute of Technology and consultants Richard Hummelbrunner and Bob Williams, spoke to the challenges posed by the traditional forms of logic models by looking at the concepts of beauty, truth and justice. These model forms tend to take the shape of the box model (the approach most common in North America), the outcome hierarchy model, and the logic framework (or ‘logframe’), which is popular in international development work.

The latter model was the focus of Hummelbrunner’s talk, which critiqued the logframe approach and showed how its highly structured way of conceptualizing programs tends to lead to a preoccupation with the wrong things and a rigidity in the way programs are approached. Logframes work well in environments that are linear and straightforward, and in situations where funders need simple, rapid overviews of programs. But as Hummelbrunner says:

Logframes fail in messy environments

The reason is often that people make assumptions of simplicity when such programs are really complicated or complex. Patricia Rogers illustrated ways of conceptualizing programs using the traditional box models, but showed how different outcomes could emerge from one program, or how multiple programs may need to work simultaneously to achieve a particular outcome.

What Rogers emphasized was the need for logic models to have a sense of beauty to them.

Logic models need to be beautiful, to energize people. They can’t just be the equivalent of a wiring diagram for a program.

According to Rogers, the process of developing a logic model is most effective when it maintains harmony between the program and the people within it. Too often such model development processes are dispiriting events rather than exciting ones.

Bob Williams concluded the session by furthering the discussion of beauty, truth and justice, expanding the definitions of these terms within the context of logic models. Beauty is the essence of relationships, which is what logic models show. Truth is about providing opportunities for multiple perspectives on a program. And justice comes through boundary critique, which is an opportunity for ethical decision making.

On that last point, Williams made some important arguments about how, in systems-related research and evaluation, the act of choosing a boundary is a profound ethical decision. Who is in, who is out, what counts and what does not are all questions critical to the issue of justice.

To conclude, Williams also challenged us to look at models in new ways, asking:

Why should models be the servant of data, rather than have data serve the models?

In this last point, Williams highlights the current debates within the knowledge management community, which is grappling with a decade in which trillions of data points have been generated to inform policy and programming decisions, yet better decisions still elude us. Is more data better?

The session was a wonderful punctuation to the day. It really advanced the discussion on something as fundamental as logic models, yet took us to a new set of places by considering them as things of artful design and beauty, as ethical decision-making tools, and as vehicles for exploring the truths that we live. Pretty profound stuff for a session on something as seemingly benign as a planning tool.

The session ended with a great question from Bob Williams to the audience, one that speaks to why systems are also about the people within them. He implored evaluators to consider:

Why don’t we start with the people first instead of the intervention, rather than the other way around like we normally do?

evaluation, systems thinking

American Evaluation Association Conference

Over the next few days I’ll be attending the American Evaluation Association conference in San Antonio, Texas. The conference is the biggest gathering of evaluators in the world. Depending on the Internet connections, I will try to do some live tweeting from my @cdnorman account and some blogging reflections along the way, so do follow along if you’re interested. In addition to presenting some of the work that I’ve been engaged in on team science with my colleagues at the University of British Columbia and Texas Tech University, I will be looking to connect more with those groups and individuals doing work on systems evaluation and developmental evaluation, with an eye to spotting the trends and developments (no pun intended) in those fields.

Evaluation is an interesting area to be a part of. It has no disciplinary home; it has a set of common practices but much diversity as well, and it brings together a fascinating blend of people from all walks of professional life.

Stay tuned.

complexity, education & learning, evaluation, social systems

Developmental Evaluation And Accountability

Today I’ll be wrapping up a two-day kick-off to an initiative aimed at building a community of practice around Developmental Evaluation (PDF), working closely with DE leader and chief proponent, Michael Quinn Patton. The initiative, founded by the Social Innovation Generation group, is designed in part to bring a cohort of learners (or fellows? — we don’t have a name for ourselves) together to explore the challenges and opportunities inherent in Developmental Evaluation as practiced in the world.

In our introductions yesterday I was struck by how much DE clashes with accountability in the minds of many funders and evaluation consumers. This clash strikes me as strange given that DE is ideal for providing the close narrative study of programs as they evolve and innovate, one that clearly demonstrates what a program is doing (although, due to the complex nature of the phenomenon, it may not be able to fully explain it). But as we each shared our experiences and programs, it became clear that tied to these concerns about accountability is an absence of understanding of complexity and the ways it manifests itself in social programs and problems.

Our challenge over the next year together will be how to address these and other issues in our practice.

What surprises me is that, while some see DE as lacking rigour, there is strong adherence to other methods that might be rigorous but are completely inappropriate for the problem, and that is considered OK. It is as if doing the wrong thing well is better than doing something that is a little different.

This is strange stuff. But that’s why we keep learning it and telling others about it so that they might learn too.

education & learning, evaluation, innovation, research, science & technology

Openness and The Problem With Collaboration

Openness & Collaboration

Collaboration is everywhere. It’s fast becoming one of the highest virtues to strive for in media, the health sciences, and business. Whether it is crowdsourcing, groundswells, public engagement, participatory research, or e-democracy, collaboration is hot.

Why? One of the main reasons has to do with the mere fact that we are facing an increasing array of complex problems that have multiple sources, where no one person or group has all the answers, and where large-scale social action is required if there is any hope of addressing them. The proposed solution is collaboration.

Collaboration is defined as:

collaboration |kəˌlabəˈrāSHən|
noun
1 the action of working with someone to produce or create something : he wrote on art and architecture in collaboration with John Betjeman.
• something produced or created in this way : his recent opera was a collaboration with Lessing.
2 traitorous cooperation with an enemy : he faces charges of collaboration.
DERIVATIVES
collaborationist |-nist| noun & adjective (sense 2).
ORIGIN mid 19th cent.: from Latin collaboratio(n-), from collaborare ‘work together.’

At the root of the term is (from the Latin): co-labour — working together. That sounds great in theory and indeed, if we are working in a social environment (physical or electronic), we are very likely collaborating in some manner. Social media, for instance, is built upon collaboration. The picture posted along with this blog was courtesy of psd on Flickr and used under a Creative Commons Licence (thank you!), which encourages collaboration and remixing. Knowledge translation is another concept that has collaboration at its very foundation. It’s commonplace to see it and, in the world of academic health sciences, it is considered to be an important part of the work we do.

On the surface of things, my colleagues and I collaborate a lot. But a second glance suggests that this might be overstating things — a lot. The reason has to do with collaboration’s precondition: openness.

Openness is defined (selectively) as:

open |ˈōpən|
adjective
1 allowing access, passage, or a view through an empty space; not closed or blocked up : it was a warm evening and the window was open | the door was wide open.
• free from obstructions : the pass is kept open all year by snowplows.

2 [ attrib. ] exposed to the air or to view; not covered : an open fire burned in the grate.

3 [ predic. ] (of a store, place of entertainment, etc.) officially admitting customers or visitors; available for business : the store stays open until 9 p.m.

4 (of a person) frank and communicative; not given to deception or concealment : she was open and naive | I was quite open about my views.
• not concealed; manifest : his eyes showed open admiration.

Let’s consider these definitions for a moment within the context of health and social services, the area I’m most familiar with.

Allowing access refers to having the ability to gain entry to something — physical or otherwise. That might be simple if collaboration is with members of the same team — but what about when you have people from other teams? Other disciplines? Having worked on a project that focuses on interdisciplinary collaboration between teams of researchers, I can vouch that it is not something to be taken for granted. Developing a collaborative approach to research, particularly in teams, is something that takes a long time to foster. Then there is confidentiality, along with rules and regulations about who has access to what. Even in teams that are open to true collaboration, sometimes the rules that govern institutions don’t allow researchers to engage across settings to access data.

Having something “not blocked up” sounds good, but anyone looking for collaboration knows that there are a lot of preconceived ideas about what that means in practice. For example, are certain people expected to get credit even if they don’t offer anything substantive? There are conventions for authorship that often grant those who lead the lab a prime authorship position, with little attention to the amount of effort contributed to a paper.

What about being “exposed to the air or to view; not covered”? This could mean being open to new ideas or ways of working. Sure, it sounds nice to say that you’re open to ideas and suggestions, but what about real practice? Resistance to new ideas is how innovation is thwarted, but it also protects interests within organizations and among individuals. As the saying goes:

The only people who welcome change are wet babies

Lastly, frank and communicative action is a part of openness, and if there is anything that represents the converse of that, it is academic publishing. It probably should strike people as surprising how often scientists report positive results in the academic literature, but it doesn’t. Why? There is a well-known publication bias, whether real in terms of editorial bias or in terms of self-selection away from publishing negative trials. Another issue is that collaboration is hard, it’s not well funded (that is, the collaboration part; the science itself sometimes is), and it takes a long time to produce something of value. The reason is that it is based on normal human relationships, and those don’t fit a timeline that’s particularly ‘efficient’. It’s also hard to be frank when your reputation and funding are on the line.

So collaboration will continue to soar as an idea, yet until we acknowledge its challenges in an open, frank manner (as the term suggests), we are going to see only marginal benefits for science, health and innovation.

complexity, evaluation, research, social systems, systems science

Developmental Evaluation: Problems and Opportunities with a Complex Concept

Everyone's Talking About Developmental Evaluation

“When it rains, it pours,” so says the aphorism about how things tend to cluster. Albert-László Barabási has found that pattern to be indicative of a larger complex phenomenon that he calls ‘bursts’, something worth discussing in another post.

This week, that ‘thing’ seems to be developmental evaluation. I’ve had more conversations, emails and information nuggets placed in my consciousness this week than I have in a long time. It must be worth a post.

Developmental evaluation is a concept widely attributed to Michael Quinn Patton, a true leader in the field of evaluation and its influence on program development and planning. Patton first wrote about the concept in the early 1990s, although it didn’t really take off until recently, in parallel with the growing popularity of complexity science and systems thinking approaches to understanding health and human services.

At its root, Developmental Evaluation (DE) is about evaluating a program in ‘real time’ by looking at programs as evolving, complex adaptive systems operating in ecologies that share this same set of organizing principles. This means that there is no definitive manner to assess program impact in concrete terms, nor is any process that is documented through evaluation likely to reveal absolute truths about the manner in which a program will operate in the future or in another context. To traditional evaluators or scientists, this is pure folly, madness or both. When your business is coming up with the answer to a problem, any method that fails to give you ‘the’ answer is problematic.

But as American literary critic H.L. Mencken noted:

“There is always an easy solution to every human problem — neat, plausible and wrong”

Traditional evaluation methods work when problems are simple or even complicated, but rarely do they provide the insight necessary for programs with complex interactions. Most community-based social services fall into this realm, as does much of the work done in public health, eHealth, and education. The reason is that there are few ways to standardize programs that are designed to adapt to changing contexts or that operate in an environment where there is no stable benchmark to compare against.

Public health operates well within the former situation. Disaster management, disease outbreaks, or wide-scale shifts in lifestyle patterns all produce contexts that shift — sometimes radically — so that the practice that works best today, might not be the one that works best tomorrow. We can see this problem demonstrated in the difficulty with ‘best practice’ models of public health and health promotion, which don’t really look like ‘best’ practices, but rather provide some examples of things that worked well in a complex environment. (It is for this reason that I don’t favour or use the term ‘best practice’ in public health, because I simply view too much of it as operating in the realm of the complex, which is something for which the term is not suited.)

eHealth provides an example of the latter. The idea that we can expect to develop, test and implement successful eHealth interventions and tools in a manner that fits with the normal research and evaluation cycle is impractical at best and dangerous at worst. Three years ago Twitter didn’t exist except in the minds of a few thousand people, and now it has a user population bigger than a large chunk of Europe. Geo-location services like Foursquare, Gowalla and Google Latitude are becoming popular and morphing so quickly that it is impossible to develop a clear standard to follow.

And that is OK, because that is the way things are, not the way evaluators want them to be.

DE seeks to bring some rigour, method and understanding to these problems by creating opportunities to learn from this constant change and use the science of systems to help make sense of what has happened, what is going on now, and to anticipate possible futures for a program. While it is impossible to fully predict what will happen in a complex system due to the myriad interacting variables, we can develop an understanding of a program in a manner that accounts for this complexity and creates useful means of understanding opportunities. This only really works if you embrace complexity rather than try and pretend that things are simple.

For example, evaluation in a complex system considers the program ecology as interactive, relationship-based (and often networked) and dynamic. Many traditional evaluation methods seek to understand programs as if they were static; that is, as if the lessons of the past can predict the future. What isn’t mentioned is that we evaluators can ‘game the system’ by developing strategies that generate data that fit well into a model. But if the questions are not suited to a dynamic context, the least important parts of the program will be highlighted, and thus the true impact of a program might be missed in the service of developing an acceptable evaluation. It is what Russell Ackoff called doing the wrong things righter.

DE also takes evaluation one step further and fits it with Patton’s Utilization-Focused Evaluation approach, which frames evaluation in a manner that focuses on actionable results. This approach integrates problem framing, data collection, analysis, interpretation and use together, akin to the concept of knowledge integration. Knowledge integration is the process by which knowledge is generated and applied together, rather than independently, and reflects a systems-oriented approach to knowledge-to-action activities in health and other sciences, with an emphasis on communication.

So hopefully these conversations will continue, and DE will no longer be something that peaks in certain weeks, but rather something that infuses my colleagues’ conversations about evaluation and knowledge translation on a regular basis.

design thinking, education & learning, evaluation, innovation, research

Design Thinking or Design Thinking + Action?

 

There is a fine line between being genuinely creative, innovative and forward thinking and just being trendy.

The issue is not a trivial one, because good ideas can get buried when they become trendy, not because they are no longer any good, but because the original meaning behind the term and its very integrity get warped by the influx of products that poorly adhere to the spirit, meaning and intent of the original concepts. Nowhere is this more evident than in the troika of concepts that sit at the centre of this blog: systems thinking, design thinking and knowledge translation. (eHealth seems to have lost some of its lustre.)

This issue was brought to light in a recent blog post by Tim Brown, CEO of the design and innovation firm IDEO. In the post, Brown responds to another post on the design blog Core77 by Kevin McCullagh that spoke to the need to re-think the concept of design thinking and to ask whether its popularity has outstripped its usefulness. It is this popularity that is killing the true discipline of design by unleashing a wave of half-baked applications of design thinking on the world and passing them off as good practice.

There’s something odd going on when business and political leaders flatter design with potentially holding the key to such big and pressing problems, and the design community looks the other way.

McCullagh goes on to add that the term design thinking is growing out of favour with designers themselves:

Today, as business and governments start to take design thinking seriously, many designers and design experts are distancing themselves from the term. While I have often been dubbed a design thinker, and I’ve certainly dedicated my career to winning a more strategic role for design. But I was uncomfortable with the concept of design thinking from the outset. I was not the only member of the design community to have misgivings. The term was poorly defined, its proponents often implied that designers were merely unthinking doers, and it allowed smart talkers with little design talent to claim to represent the industry. Others worried about ‘overstretch’—the gap between design thinkers’ claims, and their knowledge, capabilities and ability to deliver on those promises.

This last point is worth noting, and it speaks to the problem of ‘trendiness’. As the concept of design thinking has become commonplace, the rigour with which it was initially applied and the methods used to develop it seem to have been cast aside, or at least politely ignored, in favour of something more trendy, so that everyone and anyone can be a design thinker. Whether this is a good thing or not is up for debate.

Tim Brown agrees, but only partially, adding:

I support much of what (McCullagh) has to say. Design thinking has to show impact if it is to be taken seriously. Designing is as much about doing as it is about thinking. Designers have much to learn from others who are more rigorous and analytical in their methodologies.

What I struggle with is the assertion that the economic downturn has taken the wind out of the sails of design thinking. My observation is just the opposite. I see organizations, corporate or otherwise, asking broader, more strategic, more interesting questions of designers than ever before. Whether as designers we are equipped to answer these questions may be another matter.

And herein lies the rub. Design thinking as a method of thinking has taken off, while design thinking methodologies (or rather, their study and evaluation) have languished. Yet for design thinking to be effective in producing real change (as opposed to just new ways of thinking), its methods need to be either improved, or implemented better and evaluated. In short: design thinking must also include action.

I would surmise that it is up to designers, but also academic researchers, to take on this challenge and create opportunities to develop design thinking as a disciplinary focus within applied research faculties. Places like the University of Toronto’s Rotman School of Management and the Ontario College of Art and Design’s Strategic Innovation Lab are places to start, but schools of public health, social work and education should follow. Only when its methods improve and the research behind them matures will design thinking escape the “trendy” label and endure as a field of sustained innovation.

behaviour change, complexity, evaluation, health promotion, psychology

When Change Potential is Embedded in Bigger Systems

 

Yesterday I was part of an examination committee for a student discussing issues of health promotion, policy change and advocacy for a population that has been widely viewed as marginalized. The challenge this student was wrestling with was balancing issues of collective and individual empowerment, determining where the appropriate action needs to take place, and then determining how to evaluate the impact of such action. Drawing on Isaac Prilleltensky’s brilliant work on empowerment theory, the student’s project hopes to foster change that fits somewhere between the individual and the community. But how to evaluate the impact?

An empowerment approach, as conceived by Prilleltensky, involves both personal and societal shifts occurring simultaneously to be most effective. If individuals are motivated to change, yet the system is not prepared to adapt to these changes, the value of empowerment is diminished and so is the effect on society. The question then shifts to finding a place to start, or determining which is the chicken and which is the egg. That question is less useful than one that considers ways to understand the embedded nature of change agents, and of change itself, within systems shaped both by structure and by time.

Barack Obama was elected in a manner that greatly changed the way we look at politics. While he made enormous strides in shaping an electorate, his success at governing has been more muted. Obama’s potential to govern well is embedded in the policies and practices that came before him, whether he likes it or not. This is illustrated to full comic effect in a recent Ron Howard ‘Presidential Reunion’ short on Funny or Die. George W. Bush built his policy agenda in a manner that was positioned in relation to Bill Clinton’s, which was positioned in relation to George H.W. Bush’s, and so on. Yes, there are some clear departures based on incidents of massive, abrupt change, such as the September 11th attacks, which led to major reactive shifts in policy like the creation of the U.S. Patriot Act, the creation of new governmental bodies, and the initiation of two wars abroad. But these are the extremes. A closer look at most non-revolutionary government shifts shows that policy evolves and gets tweaked, but rarely exhibits radical change from administration to administration. Even though the rhetoric around health care reform in the U.S. has spoken of ‘radical change’, the bottom line is that whatever policy emerges will bear a closer resemblance to what came before it than it will differ from it.

The embedded structure of social systems is akin to Russian Matryoshka dolls. Our ability to change hinges upon where in the stack of dolls we lie and how tightly those dolls are stuck together. I would argue that Obama’s electoral success had a lot to do with a system where the fit of the dolls was loose. There was a clear process for getting nominated (e.g., primaries), but the manner by which interest gets generated and people get out to vote was loose at the time of his campaign. Obama succeeded primarily because he got people to vote who had never come out before, the population that most had given up on trying to reach. In government, the fit is much tighter. Everything has a protocol and a history, and receives such intense scrutiny that even the smallest shift is noticed, dissected and critiqued.

That leads to a lot of information and feedback, much of it contradictory. Hence, the inertia. With more information than ever at our disposal, the risk that this inertia will persist is high. Jaron Lanier, who I wrote about in my last post, might attribute this to ‘lock-in’: the dominant way of doing things. Obama succeeded because he found a new model of campaigning, captured nicely in three recent books (Harfoush / Plouffe / Sabato). We don’t yet have a new way of governing.

From an evaluation perspective, it becomes critical that we understand both these structures and the fit between these variables, or the degree to which the dominant design or ‘lock-in’ plays a role in mediating change, if we are to understand the impact that our efforts to create change are having.

For the student who just defended her comprehensive exam, the challenge of using health promotion to instill change will depend on how locked in our society is in its attitudes towards vulnerable populations, and on the fit between the individual and the community with regard to empowerment. I hope that, like Obama campaigning in 2008, the fit is loose.