Month: July 2014

education & learning, research, systems thinking

The urban legends of learning (and other inconvenient truths)

Learning simulacrum, simulation or something else?

Learning styles, technology-driven teaching, and self-direction are all concepts that anyone interested in education should be familiar with, yet the foundations for their adoption into the classroom, lab or boardroom are more suspect than you might think. Today we look at the three urban legends of learning and what that might mean for education, innovation and beyond. 

What kind of learner are you? Are you a visual learner, perhaps, who needs information presented in a particular visual style to make sense of it? Maybe you need to problem-solve to learn, because that’s what you’ve been told is best for your education.

Perhaps you are a self-directed learner: someone who, given the right encouragement and tools, will find a way through the muck to the answers, and who just needs others to get out of the way. With tools like the web and social media, you have the world’s knowledge at your disposal and little need to be ‘taught’ that stuff, because it’s online.

And if you’re a digital native (PDF), this is all second nature to you: you’re able to use multiple technologies simultaneously to solve multiple problems with ease, given the chance. After all, you’ve had these tools your entire life.

A recent article by Paul Kirschner and Jeroen van Merriënboer published in the peer-reviewed journal Educational Psychologist challenges these ‘truths’ and many more, calling them urban legends:

An urban legend, urban myth, urban tale, or contemporary legend, is a form of modern folklore consisting of stories that may or may not have been believed by their tellers to be true.

The authors are quick to point out that there are differences in the way people approach material and prefer to learn, but they also show that there is relatively little evidence to support much of the thinking surrounding these practices, which confuses learning preferences with learning outcomes. I’ve commented on this before, noting that too often learning is conflated with interest and enjoyment when they are different things; if we were really serious about learning, we might change the way we do a great many things in life.

In the paper, the authors debunk — or at least question — the evidence supporting the ‘legends’ of the digital native as a type of learner, the existence of specific learning styles and the need to customize instruction to suit them, and the lone self-educator. In each case, the authors present considerable evidence challenging these ideas, treating them not as truths but as hypotheses with little support in practice.

Science and its inconvenient truths about learning

Science has a funny way of revealing truths that we may find uncomfortable or that at least challenge our current orthodoxy.

This reminds me of a terrific quote from the movie Men in Black that illustrates the fragility of ideas in the presence and absence of evidence. After one of the characters (played by Will Smith) discovers that aliens are living on Earth, he is consoled by his partner (played by Tommy Lee Jones) about what is known and unknown in the world:

Fifteen hundred years ago everybody knew the Earth was the center of the universe. Five hundred years ago, everybody knew the Earth was flat, and fifteen minutes ago, you knew that humans were alone on this planet. Imagine what you’ll know tomorrow.

One of the problems with learning is that there is a lot to learn and not all of it is the same in content, format and situational utility. Knowledge is not a ‘thing’ in the way that potatoes, shoes, patio furniture, orange juice, and pencils are things where you can have more or less of it and measure the increase, decrease and change in it over time. But we often treat it that way. Further, knowledge is also highly contextualized and combines elements that are stable, emergent, and transformative in new, complex arrangements simultaneously over time. It is a complex adaptive system.

Learning (in practice) resists simple truths.

It’s why we can be taught something over and over and not get it, while the same person picks up other things quickly, even when the two ‘things’ seem alike. The conditions in which a person might learn are cultural (e.g., exposure to teaching styles at school, classroom designs, educational systems, availability of and exposure to technology, life experiences, emphasis on reflective living/practice within society, time to reflect, etc.) and psycho-social/biological (e.g., attention, intelligence, social proximity, literacy, cognitive capacity for information processing, ability to engage with others), so reducing this complex phenomenon to a series of statements about technology, preference and perception is highly problematic.

Science doesn’t have all the answers — far from it — but at least it can test out what is consistent and observable over time and build on that. In doing so, it exposes the responsibility we have as educators and learners.

With great power comes great responsibility…?

Underpinning the urban legends that Kirschner and van Merriënboer discuss, though they do not address it directly, is the tendency for these legends to create hands-off learning systems in which workplaces, schools, and social systems are freed from the responsibility of shaping learning experiences and opportunities. This effectively reduces institutional knowledge, wisdom and experience to mere variables in a panoply of info-bites treated as all the same.

It also assumes that design doesn’t matter, which undermines the ability to create spaces and places that optimize learning options for people from diverse circumstances.

This mindset frees organizations from having to give time to learning or provide direction (i.e., do their own homework and set the conditions for effective learning and knowledge integration at the outset). It also frees us from having to choose, to commit to certain ideas and theories, which demands some form of discernment, priority setting, and strategy. That requires work up front, leadership, and hard, critical, time-consuming conversations about what is important, what we value in our work, and what we want to see.

When we assume everyone will just find their way we abdicate that responsibility.

Divesting resources and increasing distraction

In my home country of Canada, governments have been doing this with social investment for years: the federal government divests interest to the provinces, which divest it to cities and towns, which divest it to the public (and private) sector. Our taxes never go up even as the demands on services do, and individual citizens become responsible for more of the process of generating collective benefit without the advantage of any scaled system to support resource allocation and deployment throughout society (which is why we have governments in the first place). It also means our services and supports — mostly — get smaller, lower in quality, more thinly spread, and lose their impact because there isn’t the scaled allocation of resources to support them.

Learning is the same way. We divest our interests in it and, before you know it, we learn less and do less with it because we haven’t the cultural capital, traditions or infrastructure to handle it. Universities turn campus life into an online experience. Secondary schools stop or reduce teaching physical education that involves actual physical activity. Scholarly research is reduced to a Google search. Books are given up as learning vehicles because they take too long to read. It goes on.

It’s not that these ideas offer no advantages in some doses, but that we are transforming the entire enterprise with next to no sense of the systems they operate in, the mission they are meant to accomplish, a theory of change backed by evidence, the will to generate the evidence needed to advise, or the resources to engage in the sensemaking needed to evaluate that evidence.

Science, systems and learning

It is time to start some serious conversations about systems, science and learning. It would help if we started getting serious about what we mean when we speak of learning, what theories we use to underpin that language and what evidence we have (or need) to understand what those theories mean in practice and for policy. This starts by asking better questions — and lots of them — about learning and its role in our lives and work.

Design thinking and systems thinking are two thinking tools that can help us find and frame these issues. Mindfulness, with its associated ethics of non-judgement, open-mindedness, compassion and curiosity, is another key tool. The less we judge, the more open we are to asking good questions about what we are seeing, questions that can lead us to better answers rather than leaving us trapped by urban legends.

Doing this within a systems thinking frame also allows us to see how what we learn and where and how we learn it are interconnected, so we can better spot areas of leverage and problems in our assumptions.

This might allow us to make many of our urban legends obsolete instead of allowing them to grow like the alligators that live in the sewers of New York City. 

complexity, education & learning, evaluation, systems thinking

Developmental Evaluation: Questions and Qualities

Same thing, different colour or different thing?

Developmental evaluation, a form of real-time evaluation focused on innovation and complexity, is gaining interest and attention with funders, program developers, and social innovators. Yet its popularity is revealing fundamental misunderstandings and misuses of the term that, if left unquestioned, may threaten the advancement of this important approach as a tool to support innovation and resilience.

If you are operating in the social service, health promotion or innovation space it is quite possible that you’ve been hearing about developmental evaluation, an emerging approach to evaluation that is suited for programs operating in highly complex, dynamic conditions.

Developmental evaluation (DE) is an exciting advancement in evaluative and program design thinking because it links those two activities together and creates an ongoing conversation about innovation in real time, facilitating strategic learning about what programs do and how they can evolve wisely. Because it is rooted in traditional program evaluation theory and methods as well as complexity science, it takes a realist approach to evaluation, making it fit the thorny, complex, real-world situations that many programs inhabit.

I ought to be excited at seeing DE brought up so often, yet I am often not. Why?

Building a better brand for developmental evaluation?

Alas, with rare exceptions, when I hear someone speak about the developmental evaluation they are involved in, I fail to hear any of the indicator terms one would expect from such an evaluation. These include terms like:

  • Program adaptation
  • Complexity concepts like emergence, attractors, self-organization, and boundaries
  • Strategic learning
  • Surprise!
  • Co-development and design
  • Dialogue
  • System dynamics
  • Flexibility

DE is following the well-worn path laid by terms like systems thinking, which gets less useful every day as it comes to refer to any mode of thought that focuses on the bigger context of a program (the ‘system’ — whatever that is, it’s never elaborated on), even when there is none of the structure, discipline, method or focus one would expect from true systems thinking. In other words, it’s thinking about a system without the effort of real systems thinking. Still, people see themselves as systems thinkers as a result.

I hear the term DE used in this cavalier manner with growing frequency, and I suspect it reflects aspiration rather than reality.

This aspiration is likely about wanting to be seen (by themselves and others) as innovative, adaptive, and participative, and as a true learning organization. DE has the potential to support all of this, but accomplishing these things requires an enormous amount of commitment. It is not for the faint of heart, the rigid and inflexible, the traditionalists, or those who have little tolerance for risk.

Doing DE requires that you set up a system for collecting, sharing, sensemaking, and designing with data. It means being willing to — and competent enough to know how to — adapt your evaluation design and your programs themselves in measured, appropriate ways.

DE is about discipline, not precision. Too often, I see quests to fit a beautiful, elegant design to the ‘social messes‘ that are the programs under evaluation, only to do what Russell Ackoff calls “the wrong things, righter” because a standard, rigid method is applied to a slippery, complex problem.

Maybe we need to build a better brand for DE.

Much ado about something

Why does this fuss about the way people use the term DE matter? Is this not some academic rant based on a sense of ‘preciousness’ of a term? Who cares what we call it?

This matters because the programs that use and can benefit from DE matter. If it’s just gathering some loose data, slapping it together, calling it an evaluation and knowing that nothing will ever be done with it, then maybe it’s OK (actually, that’s not OK either — but let’s pretend here for the sake of the point). When real program decisions are made, jobs are kept or lost, communities are strengthened or weakened, and the energy and creative talents of those involved are put to the test because of an evaluation and its products, the details matter a great deal.

If DE promises a means to critically, mindfully and thoroughly support learning and innovation, then it needs to keep that promise. But that promise can only be kept if what we call DE is not something else.

That ‘something else’ is often a form of utilization-focused evaluation, or perhaps participatory evaluation, or it might simply be a traditional evaluation model dressed up with words like ‘complexity’ and ‘innovation’ that carry no real meaning. (When was the last time you heard someone openly question what a speaker meant by those terms? We take such terms as given and for granted, and make enormous assumptions about what they mean that are not always supported.)

There is nothing wrong with any of these methods when they are appropriate, but too often I see mismatches between the problem and the evaluative thinking and practice tools used to address it. DE is new, sexy and a sure sign of innovation to some, which is why it is often picked.

Yet it’s like saying “I need a 3-D printer” when what you need to fix a pipe on your sink is a wrench, simply because the printer is the latest tool innovation and wrenches are “last year’s” tool. It makes no sense. Yet it’s done all the time.

Qualities and qualifications

There is something alluring about the mysterious. Innovation, design and systems thinking all have elements of mystery to them, which allows for obfuscation, confusion and well-intentioned errors in judgement depending on who and what is being discussed in relation to those terms.

I’ve started seeing recent university graduates claim to be developmental evaluators with almost no concept of complexity or service design and just a single course in program evaluation. I’m seeing traditional organizations recruit and hire for developmental evaluation without making any adjustments to their expectations, modes of operating, or timelines, while still expecting results that could only come from DE. It’s as I’ve written before, and as Winston Churchill once said:

I am always ready to learn, but I don’t always like being taught

Many programs are not even primed to learn, let alone be taught.

So what should someone look for in DE and in those who practice it? What are some questions those seeking DE support should ask of themselves?

Of evaluators

  • What familiarity and experience do you have with complexity theory and science? What is your understanding of these domains?
  • What experience do you have with service design and design thinking?
  • What kind of evaluation methods and approaches have you used in the past? Are you comfortable with mixed-methods?
  • What is your understanding of the concepts of knowledge integration and sensemaking? And how have you supported others in using these concepts in your career?
  • What is your education, experience and professional qualifications in evaluation?
  • Do you have skills in group facilitation?
  • How open and willing are you to support learning, adapt, and change your own practice and evaluation designs to suit emerging patterns from the DE?

Of programs

  • Are you (we) prepared to alter our normal course of operations in support of the learning process that might emerge from a DE?
  • How comfortable are we with uncertainty? Unpredictability? Risk?
  • Are our timelines and boundaries we place on the DE flexible and negotiable?
  • What kind of experience do we have with truly learning, and are we prepared to create a culture around the evaluation that is open to learning? (This means tolerance of ambiguity, failure, surprise, and new perspectives.)
  • Do we have practices in place that allow us to be mindful and aware of what is going on regularly (as opposed to every six months to a year)?
  • How willing are we to work with the developmental evaluator to learn, adapt and design our programs?
  • Are our funders/partners/sponsors/stakeholders willing to come with us on our journey?

Of both evaluators and program stakeholders

  • Are we willing to be open about our fears, concerns, ideas and aspirations with ourselves and each other?
  • Are we willing to work through data that is potentially ambiguous, contradictory, confusing, time-sensitive, context-sensitive and incomplete in capturing the entire system?
  • Are we willing/able to bring others into the journey as we go?

DE is not a magic bullet, but it can be a very powerful ally to programs operating in domains of high complexity that require innovation to adapt, thrive and build resilience. It is an important job and a formidable challenge, with great potential benefits for those willing to dive into it competently. It is for these reasons that it is worth doing and doing well.

Getting there means taking DE seriously: the demands it puts on us, the requirements for all involved, and the need to be clear in our language, lest we let the not-good-enough be the enemy of the great.

Photo credit: Highline Chairs by the author

innovation, social innovation

The Finger Pointing to the Moon

SuperLuna

In social innovation we are at risk of confusing our stories of success for real, genuine impact. Without theories, implementation science or evaluation, we risk aspiring to travel to the moon while leaving our rockets stuck on the launchpad.

There is a Buddhist expression that goes like this:

Be careful not to confuse the finger pointing to the moon for the moon itself. *

It’s a wonderful phrase that is playful and yet rich in many meanings. Among the most poignant of these meanings is related to the confusion between representation and reality, something we are starting to see exemplified in the world of social innovation and its related fields like design and systems thinking.

On July 13, 2014 the earth experienced a “supermoon” (captured in the above photograph), so named because of its close passage to earth. While it may have seemed almost close enough to touch, it remained at a distance unfathomable to nearly everyone except a handful of people on this planet. There were a lot of fingers pointed at the moon that night.

While the moon has held fascination for humans for millennia, it’s also worth drawing our attention to the pointing fingers, too.

Pointing fingers

How often do you hear “we are doing amazing stuff” when leaders describe their social innovations in the community, universities, government, business or partnerships between them? Thankfully, it’s probably more often than ever, because the world needs good, quality innovative thinking and action. Indeed, judging from the rhetoric at conferences and events and from the academic and popular press, it seems we are becoming more innovative all the time.

We are changing the world.

…Except, that is a largely useless statement on its own, even if well meaning.

Without documentation of what this “amazing stuff” looks like, a theory or logic explaining how those activities are connected to an outcome, and an observed link between it all (i.e., evaluation), there really is no evidence that the world is changed – or at least changed in a manner better than had we done something else or nothing at all. That is the tricky part about working with complex systems, particularly large ones. How the world is changed is the subtitle of the book by Brenda Zimmerman, Frances Westley and Michael Quinn Patton on complexity and evaluation in social change, Getting to Maybe. It is because change requires theory, strategic implementation and evaluation that these three leaders came together to discuss what can be called social innovation. They introduce theory, strategy and evaluation ideas in the book and — while it has remained a popular text — I rarely see those ideas referred to in serious conversations about social innovation.

Unfortunately, concrete discussion of these three areas — theory, strategic implementation, and evaluation — is largely absent from the dialogue on social innovation. Nowhere was this more evident than at the social innovation week events held across Canada in May and June of this year, a series of gatherings between practitioners, researchers and policy makers from all kinds of sectors and disciplines. The events brought together some of the leading thinkers, funders, institutes and social labs from around the world and were as close to the “social innovation olympics” as one could get. The stories told were inspirational, the programming was diverse, and the ideas shared were creative and interesting.

And yet, many of those I spoke to (myself included) were left with the question: What do I do with any of this? Without something specific to anchor to, that question remained unanswered.

Lots of love, not enough (research) power

As often happens, these gatherings served more as a rallying cry for those working in the sector — something quite important on its own as a critical support mechanism — than as a challenge to ourselves. As Geoff Mulgan from Nesta noted in the closing keynote to the Social Frontiers event in Vancouver (riffing off Adam Kahane’s notion of power and love as a vehicle for social transformation), the week featured a lot of love and not so much expression of power (as in critique).

Reflecting on the social innovation events I’ve attended, the books and articles I’ve read, and the conversations I’ve had in the first six months of 2014, it seems evident that the love is being felt by many, but that it is woefully under-powered (pun intended). The social innovation week events just clustered a lot of this conversation in one week; they are a sign of a larger trend that emphasizes storytelling independent of the kind of details one might find at an academic event. Stories can inspire (love), but they rarely guide (power). Adam Kahane is right: we need both to be successful.

The good news is that we are doing love very well and that’s a great start. However, we need to start thinking about the power part of that equation.

There is a dearth of quality research in the field of social innovation and relatively little in the way of concrete theory or documented practice to guide anyone new to this area of work. Yes, there are many stories, but these offer little beyond inspiration to follow. It’s time to add some guidance and a space for critique to the larger narrative in which these stories are told.

Repeating patterns

What often comes from the Q & A sessions following a presentation of a social innovation initiative are the same answers as ‘lessons learned’:

  • Partnerships and trust are key
  • This is very hard work and it’s all very complex
  • Relationships are important
  • Get buy-in from stakeholders and bring people together to discuss the issues
  • It always takes longer than you think to do things
  • It’s hard to get and maintain resources

I can’t think of a single presentation over the past six months where these weren’t presented as ‘take-home messages’.

Yet none of these answers explains what was done in tangible terms, how well it was done, what alternatives exist (if any), what the rationale for the program was and what research/evidence/theory underpins that logic, what unintended consequences have emerged from these initiatives, or what evaluated outcomes they had beyond the number of participants, events, or dollars moved.

We cannot move forward beyond love if we don’t find some way to power-up our work.

Theories of change: The fingers and the moons

Perhaps the best place to start to remedy this problem of detail is developing a theory of change for social innovation**.

Indeed, the emergence of discourse on theory of change in worlds of social enterprise, innovation and services in recent years has been refreshing. A theory of change is pretty much what it sounds like: a set of interconnected propositions that link ideas to outcomes and the processes that exist between them all. A theory of change answers the question: Why should this idea/program/policy produce (specific) changes?

The strength of the theory of change movement (as one might call it) is that it inspires social innovators to think critically about the logic in their programs at a human scale. More flexible than a program logic model and more detailed than a simple hypothesis, a theory of change can guide strategy and evaluation simultaneously, and it works well with other social innovation-friendly concepts like developmental evaluation and design.

The weakness in the movement is that many theories of change fail to consider what has already been developed. There is an enormous amount of conceptual and empirical work on behaviour change theories at the individual, organizational, community and systems levels that can inform a theory of change. Disciplines such as psychology, sociology, political theory, geography and planning, business and organizational behaviour, and evolutionary biology all have well-researched, well-developed theories to explain changes in activity. Too often, I see theories developed without knowledge or consideration of these established theories. This is not to say that one must rely on past work (particularly in the innovation space, where examples might be few), but if a theory is solid and has evidence behind it, it is worth considering. Not all theories are created equal.

It is time for social innovation to start raising the bar for itself and the world it seeks to change. It is time to start advancing theories, strategic implementation and evaluation practice and research so that the social innovation events of the future foster real power for change and not just inspiration and love.

 

* one of the more cited translated versions of this phrase has been attributed to Thich Nhat Hanh who suggests the Buddha remarked: “just as a finger pointing at the moon is not the moon itself. A thinking person makes use of the finger to see the moon. A person who only looks at the finger and mistakes it for the moon will never see the real moon.”

** This actually means many theories of change. A theory of change is program-specific; it might resemble another program’s and be built upon the same foundations, but just as a program logic model is unique to each program, so too is a theory of change.

Photo credit: SuperLuna with different filters by Paolo Francolini used under Creative Commons License via Flickr

eHealth, innovation, public health, social innovation, social media

Seeing the lights in research with our heads in the clouds

Lights in the clouds

Some fields stagnate because they fail to take bold steps into the unknown, declining to take chances or propose new ideas when there is no research to guide them. Social innovation has a different twist on the problem: it has plenty of ideas, but little research to support those ideas. Unless the ideas and research match up, it is unlikely that either kind of field will develop.

Social innovation is a space that doesn’t lack for dreamers and big ideas. That is a refreshing change of pace from the worlds of public policy and public health, which are well-populated by those who feel chained to what’s been done as the entry point to doing something new (which is oxymoronic when you think about it).

Fields like public health and medicine are well-served by looking to the evidence for guidance on many issues, but an over-reliance on past practice and known facts as the means to guide present action seriously limits the capacity to innovate in spaces where evidence doesn’t exist and may not be forthcoming.

The example of eHealth, social media and healthcare

A good example of this is the area of eHealth. While social media has been part of the online communication landscape for nearly a decade (or longer, depending on your definition of the term), health professionals made sparse use of these tools and approaches until recently. Even today, the presence of professional voices on health matters is small within the larger online discourse on health and wellbeing.

One big reason for this — and there are many — is that health systems are not prepared for the complexity that social media introduces. Julia Belluz’s series on social media and healthcare at Maclean’s provides some of the best examples of the gaps that social media exposes and widens within the overlapping domains of health, medicine, media and the public good. Yet such problems do not change the fact that social media is here, used by billions worldwide, and increasingly a vehicle for discussing health matters from heart disease to weight management to smoking cessation.

Social innovation and research

Social innovation has the opposite problem. Vision, ideas, excitement and energy for new ideas abound within this world, yet the evidence generation needed to support it, improve upon it and foster further design innovations is notably absent (or invisible). Evaluation is not a word used much within this sphere, nor is the term research applied — at least with the rigour we see in the health field.

In late May I participated in a one-day event in Vancouver on social innovation research, organized by the folks at Simon Fraser University’s Public Square program and Nesta as part of the Social Innovation Week Canada events. Part of the rationale for the event can be found on Nesta’s website promoting an earlier Social Frontiers event in the UK:

Despite thriving practitioner networks and a real commitment from policymakers and foundations to support social innovation, empirical and theoretical knowledge of social innovation remains uneven.

Not only is this research base uneven, it’s largely invisible. I choose the word invisible because it’s unclear how much research there is; it simply isn’t made visible. Part of the problem, clearly evident at the Vancouver event, is that social innovation appears to be still at a place where it’s busy showing people it exists. This is certainly an important first step, but as this was an event devoted to social innovation research, it struck me that most attendees ought to have already been convinced of that.

Missing was language around t-scores, inter-rater reliability, theoretical saturation, cost-benefit analysis, systematic reviews and confidence intervals – the kind of terms you’d expect to hear at a research conference. Instead, words like “impact” and “scale” were thrown out with little data to back them up.

Bring us down to earth to better appreciate the stars

It seems that social innovation is a field still in the clouds with possibility, one that hasn’t turned the lights on bright enough to bring itself back down to earth. That’s the unfortunate part of research: it can be a real buzz-kill. Research and evaluation can confirm what it means for something to ‘work’ and force us to be clear about terms like ‘scale’ and ‘impact’, and this will very often mean that many high-profile, well-intentioned initiatives prove less impactful than we hope.

Yet, this attention to detail and increase in the quality and scope of research will also raise the overall profile of the field and the quality and scope of the social innovations themselves. That is real impact.

By bringing us down to earth with better-quality, more sophisticated research, presented and discussed in public and with each other, we offer the best opportunity for social innovation to truly innovate and, in doing so, reach beyond the clouds and into the stars.

Photo credit: Lightbulb Clouds by MyCatkins used under Creative Commons License. Thanks Mike for sharing!