Higher education is asking itself some big questions and making substantive changes to the way it sees itself and produces value for society. Education is increasingly being rationalized, which calls into question the metrics that are being used to judge how resources should be allocated. In a previous post, I looked at the jobs metric. Now, it’s time to look at the knowledge metric.
Just the facts
Education writer and teacher Will Richardson’s TED Book Why School? is a provocative read for anyone connected to teaching or simply interested in schooling. While it focuses largely on grade school, the issues are the same for universities and colleges, particularly as the primary and secondary students of today are tomorrow’s graduate and professional learners. Richardson questions the role of the school as an institution in its current form, suggesting that if the status quo — one characterized as an information delivery warehouse — is maintained, there is little need for schools to exist at all. Yet, if the education within schools is focused on asking better questions and learning when to apply knowledge, not just what knowledge to apply, there is hope.
The current trend in school reform is towards the Common Core Standards, which emphasize specific forms of knowledge (‘facts’) and ask that students be able to recall such content when required. Under this model, the role of the teacher is one of content manager and facilitator rather than guide or mentor, and students are prepped for tests of their knowledge (memory) rather than being asked to demonstrate its application to anything outside of the test. It is this model that many proponents of online education embrace, because the Internet is a fabulous content delivery system and education can literally be programmed and delivered to students directly, without the ‘noise’ that teachers introduce to the signal. Under this model, educational content can be delivered cheaply and widely to support uniform intended effects among learners.
Richardson argues for reforming schools to something closer to the alternative model that was advanced by educational reformer and philosopher John Dewey. Richardson writes:
“In this version of reform, schools and classrooms are seen as nodes in a much larger learning network that expands far beyond local walls. Students are encouraged to connect with others, and to collaborate and create with them on a global scale. It’s not “do your own work,” so much as “do work with others, and make it work that matters.” To paraphrase Tony Wagner, assessments focus less on what students know, and more on what they can do with what they know. And, as Dewey espoused, school is “real life,” not simply a place to take courses, earn grades, amass credits, and compete against others for recognition. There lies the tension.
This second path is simply not as easy to quantify as the first. Developing creativity, persistence, and the skills for patient problem solving, B.S.-detecting, and collaborating may now be more important than knowing the key dates and battles of the Civil War (after all, those answers are just a few taps on our phones away), but they’re all much more difficult to assign a score to. I’m not saying that a foundation of content knowledge isn’t still important. To communicate, function, and reason in the world, students need effective reading and writing skills, as well as a solid foundation in math, science, history, and more. But I’m convinced we must revise the overreaching coursework requirements we place on students — requirements created at a time of scarcity, by the way. And we desperately need to revisit the thinking we’ve developed around assessment that, as Harvard researcher Justin Reich says, “optimizes the measurable at the risk of neglecting the immeasurable.””
Facts vs Problems
The knowledge metric is flawed because it assumes that content solves problems. It also presumes that the curriculum teaches the right knowledge for the right problems and that those problems can be known in advance. Let’s look at these.
One need only look to cigarette smoking for an example of how knowledge alone doesn’t always solve or prevent problems. One would be hard pressed to find anyone over the age of five who doesn’t know that sticking a lit tube of anything in their mouth and sucking on it is at least somewhat unhealthy (and most know it is very unhealthy). An individual’s knowledge of smoking’s effects on physical health may not be complete, but it is often sufficient to inform the decision to quit or never start the unhealthy habit. And yet, citizens in highly educated countries like the United States, Canada and the U.K. smoke more than 1,000 cigarettes per person per year (and over 2,700 in places like Russia). These are not countries lacking in information on tobacco and health.
Using students’ ability to recall content as the measure presumes that what is contained in a curriculum is what they will need to know when they leave their program of study (at least as a start). While that may be somewhat true for students in the humanities and languages, it becomes highly problematic for those in dynamic fields or emergent areas of practice, and such fields are becoming the norm rather than the exception. There is no doubt that a corpus of key concepts, skills and ‘facts’ is useful, but the manner in which this knowledge can and may be applied is changing dramatically. For example, social media has upended communications in ways that very few health professionals are trained for. Journalists are particularly aware of the effect that Twitter and related tools have had on their profession.
It also presumes that the content itself is relatively static. Certainly, curriculum renewal is something that most learning institutions engage in, but the primacy of content as the driver of education also assumes that the foundation for that knowledge is solid and can be applied today in the manner it was applied yesterday. In dynamic conditions, that often isn’t true. Further, the relevance of knowledge is framed by the problems to which that knowledge is applied. Genetic information, for example, can be incredibly useful when framed against tests that have high confidence, predictability and value to people, yet without such a context it is largely useless to the non-scientists who hold it.
Areas of social innovation — which are expanding dramatically in number and scope — illustrate the problem of changing context well. This is a field characterized by problems, problem solving and novelty (which is what innovation is all about). Standard approaches apply poorly, or not at all, when we are faced with high levels of novelty. Framing and re-framing the problem, knowing what to find and where to find it, and having the skills to integrate relevant knowledge together: none of this is captured in the knowledge metric. Yet it is these skills that will lead innovation. Knowledge translation professionals know this, and so do knowledge brokers.
Are we designing our educational programming to advance the kind of problem framing, finding and solving that our world is facing? Or are we simply taking content that can be obtained through books, the Internet and other materials, repackaging it, and creating expensive warehouses of information that take learners out of the world and out of context in the process?
I don’t suggest that universities and continuing education programs stop delivering content, but if knowledge is the metric by which they are judging their success, then it behooves educational administrators and funders to justify why they can do it better than other tools. What made sense when content was a rare commodity makes little sense today, when it overflows in abundance at little or no cost. Universities and post-graduate training programs have an opportunity to re-imagine education, and they have the tools to do it in a way that makes learning more powerful and relevant for the 21st century, should they choose to change their metrics of success.
How might we take the enormous pool of talent that exists among university faculty (and their students) who co-locate (physically, virtually or in some combination) in a school and develop the skills to not only address the problems of today, but prepare everyone for the possible challenges of the future?
How might we integrate what we know, identify the knowledge we need, and create systems to take advantage of the talent and creativity of individuals to make universities, colleges, and post-professional training venues for innovation and inspiration rather than just content delivery vehicles?
What kind of metrics do we need to evaluate this kind of education should we choose to develop it?
These are questions whose answers might yield more learning than those focused on what knowledge students have when they graduate.
Image source: Shutterstock.
Post-secondary and continuing education is continuing to be rationalized in ways that are transforming the very foundation of the enterprise. Funding is a major driver of change in this field: how much is available, when it flows, where it comes from, what is funded, and who gets the funding are questions on the minds of those running the academy.
At the centre of the focus of this funding issue is the job market. Training qualified professionals for the job market in various forms has been one of the roles a university has played for more than a century. Now that role has become central.
Let’s consider what that means and what it could do in shaping the various possible futures of the university. This post, the second in a series looking at post-secondary and continuing education, focuses on the metric of jobs.
“What are all these people going to do?”
The employability of graduates is now the holy grail of education industry statistics. Earlier this year I was sitting on the stage at an academic convocation with a senior colleague, staring out at a sea of soon-to-be graduates, when he leaned over and asked the question quoted above. Looking out at hundreds of masters and doctoral graduates, and knowing that this ceremony was held twice per year, the question stuck and remains without an answer.
Maybe there were enough jobs for that cohort, but this process gets repeated twice each year at universities around the world, and each year that I’ve been a professor those numbers (of graduates) seem to go up. Some of our programs in the health sciences are admitting three times the number of students they were just ten years ago. There is much demand for education (as judged by departmental applications), but are there jobs demanding this kind of education in its current form?
Yes, the Baby Boom is moving into an age of retirement and increasing needs for health services, but do we need to graduate 80+ Physical or Occupational Therapists to meet this need this year? Do we need a few dozen more epidemiologists or health promotion specialists to add to the pool? How about psychologists or social workers: how many of those do we need? The answer from my colleagues in these fields is: We don’t know.
Chasing the Wind
Jobs are a red herring. It’s one thing to have a job, but is it the job that you trained for? (And is having that job even a reasonable goal?) Being employed is not the same as building a career. What if you were trained perfectly for a job that no longer existed? Imagine a blacksmith in the 20th century, or a bloodletter. These questions are not asked, nor is much asked about the quality of education relative to the pressures of recruitment, cost-cutting and educational rationalization. Most of us don’t know what quality education is in real terms, because we are measuring it (if we are measuring anything at all besides jobs) by standards set for the jobs of the past, not the future (or even the present).
“Skate where the puck is going, not where it’s been.” – Wayne Gretzky
Jobs are living things and very few in 2013 will resemble what they did even 10 years ago. The citizens of the developing world are entering this rapidly changing job market ready for change (See also McKinsey Global Institute report on future of work in advanced economies) because they don’t have the old ways to rely on. They are primed for change and if professional education is to meet the needs of a changing world, it needs to change too. It means getting serious about learning.
If education is rationalizing itself to focus more on jobs, then it also needs to get serious about clarifying what jobs mean, defining what ‘success’ looks like for a graduate, and whether those jobs are designed for where the proverbial puck is now or for where it is going.
Disruptive Learning / Disturbed Education
“The only thing that is constant is change.” ― Heraclitus
I’ve pointed out that learners have an uneasy relationship with learning, principally because it means disrupting things. This is a topic I’ll be covering in greater depth in a future post, but if one considers how our social, economic, and environmental systems are changing, it is not unreasonable to call this the age of disruption.
Change in complex systems is often non-linear. It may be massively punctuated, like a Lévy flight, or it may be closer to a random walk. In environments where the rate of change is large, attention must be more fine-grained than five-year reviews allow. It requires developmental evaluation methods and learning organizations, not just conventional approaches to generating and assessing feedback. It requires mindful attention and contemplative inquiry to guide a regular reflective practice, if one is to pay attention to the subtleties in change that could have enormous impact.
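The contrast between a random walk and a Lévy flight can be made concrete with a short simulation. This is a minimal sketch, not anything from the original post; the step distributions, tail parameter, and seed are all arbitrary choices for illustration:

```python
import random

random.seed(42)

def random_walk_steps(n):
    # Gaussian steps: change arrives in small, similarly sized increments
    return [random.gauss(0, 1) for _ in range(n)]

def levy_flight_steps(n, alpha=1.5):
    # Heavy-tailed (Pareto) steps with random direction: long quiet
    # stretches punctuated by occasional enormous jumps
    return [random.paretovariate(alpha) * random.choice([-1, 1])
            for _ in range(n)]

n = 10_000
rw = random_walk_steps(n)
lf = levy_flight_steps(n)

# How much does the single largest step dominate the typical step?
rw_ratio = max(abs(s) for s in rw) / (sum(abs(s) for s in rw) / n)
lf_ratio = max(abs(s) for s in lf) / (sum(abs(s) for s in lf) / n)

print("random walk: largest step is ~%.0fx the typical step" % rw_ratio)
print("Lévy flight: largest step is ~%.0fx the typical step" % lf_ratio)
```

In the Gaussian case the biggest step is only a few times the typical one, so annual or even five-year reviews would catch most of the movement; in the heavy-tailed case a single jump can dwarf everything else, which is why coarse-grained monitoring misses the moments that matter.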
For example, if journalists and news media waited five years between assessments of the state of their profession, they would have missed Twitter and come late to blogging, two of their (now) powerful sources of competition and tools of the trade. Some did wait, which is why they are no longer around. Metrics for journalism education today might consider exposure to and proficiency in social media, digital photography, handheld tools for communication, and real-time reporting skills. Metrics of the past might focus on newspapers and radio broadcasting. Which mindset, skillset and toolset would you rather be trained in today?
Questions for educators, learners (and evaluators):
Whether health sciences, journalism, human services or any field, what might some questions be that can help determine the role of job training in professional education? Here are five starters:
1. What is the state of your profession right now, and are you training people to exist in this state? Are you preparing people for the next evolution?
2. Where is your field of practice going? What are the possible futures for your profession in the next 5, 10, and 20 years? Will it still exist? Are you a blacksmith looking for more horses in the automobile age or Steve Jobs waiting to attract people to a new graphical user interface?
3. Is your mindset, skillset or toolset in need of reconsideration? Does it still do the job you’ve hired it to do?
4. What do people need that your skills can help with? What unfilled needs and expectations are there in the world that your mindset, skillset and toolset could solve?
5. What would happen if your field of practice disappeared? How else could you apply what you know to making the contribution you wish to make and earn a living? What other skills, tools and ways of thinking would you need to adapt?
Design thinking can greatly help shape the way one conceives of a problem, works through possible options, and develops prototypes to address the needs of the present and the future. Foresight methods lay additional context for design and systems thinking by providing ways to anticipate possible futures for any given field. Lastly, knowing what the state of things is now, and how they got there, can help determine the path dependencies that education may have fallen into.
We can’t change what we don’t see, and better foresight, hindsight and present sight are critical to ensuring that education outcomes are not imagined, but based on something that can actually improve learning.
Earlier this week I had the pleasure of attending talks from Bryan Boyer of the Helsinki Design Lab and learning about the remarkable work they are doing in applying design to government and community life in Finland. While the audience’s focus was on their application of design thinking, I found myself drawn to the issue of evaluation and the discussion around it when it came up.
One of the points raised was that design teams are often working within constraints that emphasize the designed product rather than its extended outcome, making evaluation a challenge to adequately resource. Evaluation is not a term that often enters discussions of design, but as the moderator of one talk suggested, maybe it should.
I can’t agree more.
Design and Evaluation: A Natural Partnership
It has puzzled me to no end that we have these emergent fields of practice aimed at social good – social finance and social impact investing, social innovation, social benefit (PDF) – that have little built into their culture to assess what kind of influence they are having beyond the basics. Yet social innovation is rarely about simple basics; its influence is likely far larger, for better or worse.
What is the impact being invested in? What is the new thing of value being created? What is the benefit, and for whom? What else happened because we intervened?
Evaluation is often the last thing to go into a program budget (along with knowledge translation and exchange activities) and the first thing to get cut (along with that same KTE work) when things go wrong or budgets tighten. Regrettably, our desire to act supersedes our desire to understand the implications of those actions. It rests on the fundamental assumption that we know what we are doing and can predict its outcomes.
Yet with social innovation, we are often doing things for the first time, or combining known elements into an unknown whole, or repurposing existing knowledge, skills and tools into new settings and situations. This is the innovation part. Novelty is pervasive, and with it come opportunities for learning as well as the potential for us to do harm as well as good.
An Ethical Imperative?
There are reasons beyond product quality and accountability that one should take evaluation and strategic design for social innovation seriously.
Design thinking involves embracing failure (e.g., “fail often to succeed sooner” is the mantra espoused by product design firm IDEO) as a means of testing ideas and prototyping possible outcomes to generate a good fit. This works for ideas and products that can be safely isolated from their environment so that the variables associated with outcomes can be measured. It works well with benign issues, but gets more problematic when such interventions are aimed at the social sphere.
Unlike technological failures in the lab, innovations involving people do have costs. Clinical intervention trials go through a series of phases — preclinical work through five stages of trials to post-market testing — to assess their impact, gradually and cautiously scaling up, with detailed data collection and analysis accompanying each step, and it’s still not perfect. Medical reporter Julia Belluz and I recently discussed this issue with students at the University of Toronto as part of a workshop on evidence and noted that as the complexity of the subject matter increases, the ability to rely on controlled studies decreases.
Complexity is typically the space that much of social innovation inhabits.
As the social realm — our communities, organizations and even global enterprises — is our lab, our interventions affect people out of the gate. Because this occurs in an inherently complex environment, I argue that the imperative to evaluate and share what is known about what we produce is critical if we are to innovate safely as well as effectively. Alas, we are far from that in social innovation.
Barriers and Opportunities for Evaluation-powered Social Innovation
There are a series of issues that permeate through the social innovation sector in its current form that require addressing if we are to better understand our impact.
- Becoming more than “the ideas people”: I heard this phrase used at Bryan Boyer’s talk hosted by the Social Innovation Generation group at MaRS. The moderator for the talk commented on how she wished she’d taken more interest in statistics at university, because it would have helped in assessing some of the impact of the work done in social innovation. There is a strong push for ideas in social innovation, but perhaps we should also include those who know how to make sense of and evaluate those ideas in our stable of talent and required skillsets for design teams.
- Guiding Theories & Methods: Having good ideas is one thing; implementing them is another. Tying the two together is the role of theory and models. Theories are hypotheses about the way things happen, based on evidence, experience, and imagination. Strategic designers and social innovators rarely refer to theory in their presentations or work. I have little doubt that there are theories being used by these designers, but they are implicit, not explicit, and thus remain unevaluable, untestable and unchallengeable by others. Some, like Frances Westley, have made the theories guiding their work explicit, but this is a rarity. Social theory, behaviour change models and theories of discovery beyond Rogers’ Diffusion of Innovations must be introduced to our work if we are to make better judgements about social innovation programs and assess their impact. Indeed, we need the kind of scholarship that applies theory and builds it as part of the culture of social innovation.
- Problem scope and its methodological challenges: Scoping social innovation is an immensely wide and complicated task, requiring methods and tools that go beyond simple regression models or observational techniques. Evaluators working in social innovation require a high-level understanding of diverse methods and, I would argue, cannot be comfortable in only one methodological tradition unless they are part of a diverse team of evaluation professionals, something that is costly and resource intensive. Those working in social innovation need to live the very credo of constant innovation in methods, tools and mindsets if they are to be effective at managing the changing conditions in social innovation and strategic design. This is not a field for the methodologically disinterested.
- Low attention to rigor and documentation: When social innovators and strategic designers do assess impact, there is too often little attention to methodological rigor. Ethnographies are presented with little attention to sampling, selection or data combination; statistics are used sparingly; and connections to theory or historical precedent are absent. There are exceptions, of course, but they are hardly the rule. Building a culture of innovation within the field relies on the ability to take quality information from one context and apply it critically to another, and if that information is absent, incomplete or of poor quality, the possibility of effective communication between projects and settings diminishes.
- Knowledge translation in social innovation: There are few fora to share what we know regularly and in the kind of depth necessary to advance deep understanding of social innovation. There are a lot of one-off events, but few regular conferences or societies where social innovation is discussed and shared systematically. Design conferences tend towards the ‘sage on the stage’ model that favours high-profile speakers and agencies, while academic conferences favour research that is less applied or action-oriented. Couple that with the client-consultant arrangements common in social innovation, and we get knowledge that is protected or privileged, and often little incentive to add a KT component to the budget.
- Poor cataloguing of research: To the last point, we have no formalized methods of determining the state of the art in social innovation, as research and practice are not catalogued. Groups like the Helsinki Design Lab and Social Innovation Generation, with their vigorous attention to dissemination, are the exception, not the rule. Complicating matters is the interdisciplinary nature of social innovation. Where does one search for social innovation knowledge? What are the keywords? ‘Innovation’ is not a good one (too general), yet neither are more specialized disciplinary terms like economics, psychology, geography, engineering, finance, enterprise, or health. Without a shared nomenclature, and networks to develop such a project, the knowledge that is made public is often left in the realm of unknown unknowns.
Moving forward, the challenge for social innovation is to find ways to make what it does more accessible to those beyond its current field of practice. Evaluation is one way to do this, but in pursuing such a course, the field needs to create space for evaluation to take place. Interestingly, FSG and the Center for Evaluation Innovation in the U.S. recently delivered a webinar on evaluating social innovation, with the principal focus being on developmental evaluation, something I’ve written about at length.
Developmental evaluation is one approach, but as noted in the webinar: an organization needs to be a learning organization for this approach to work.
The question that I am left with is: is social innovation serious about social impact? If it is, how will it know it achieved it without evaluation?
And to echo my previous post: if we believe learning is essential to strategic design we must ask: How serious are we about learning?
Tough questions, but the answers might illuminate the way forward to understanding social impact in social innovation.
* Photo credit: innovation_by_genlau.jpg from DeviantArt, used under a Creative Commons licence.
In complex systems there is a lot to pay attention to. Mindfulness and contemplative inquiry built into the organization can be a way to deal with complexity and help detect the weak signals that will make it thrive and be resilient in the face of challenges.
Most human-centred social ventures spend much of their time in the domain of complexity. What makes them complex is not the human part, but the social. As we interact with our myriad beliefs, attitudes, bases of knowledge, and perceptions, we lay the foundation for complexity and the emergent properties that come from it. It’s why we are interesting as a species and why social organizing is such a challenge, particularly when we encourage free-flowing ideas and self-determination. Because of this complexity, we are exposed to a lot of information that gets poorly filtered, poorly synthesized, or missed altogether. Yet it is in this flotsam and jetsam of information that the keys to future problems and potential ‘solutions’ to present issues might lie. This is the power of weak signals. But how do we pay attention to them? And why does it matter?
The Strength of Weak Signals
A human social organization — which could mean a firm, a network, or a community: any collection of people that is organized by itself or by other means — most likely generates complexity, sometimes often and sometimes occasionally. If we consider the Cynefin Framework, the domain of complexity is where emergent, novel practice is the dominant means of acting. To practice effectively within this space, one probes the environment, engages in sensemaking based on that information, and then responds appropriately. Viewed from another perspective, this could easily describe mindfulness practice.
Mindfulness is both a psychological state and activity and a psychospiritual practice. I am using this in the psychological sense, even if one could apply the psychospiritual lens at the same time if they wished. Bishop and colleagues (2004) proposed a two-component definition of mindfulness:
The first component involves the self-regulation of attention so that it is maintained on immediate experience, thereby allowing for increased recognition of mental events in the present moment. The second component involves adopting a particular orientation toward one’s experiences in the present moment, an orientation that is characterized by curiosity, openness, and acceptance (p. 232)
Weak signals are activities that yield little useful information when observed as discrete events, yet when observed across conditions reveal patterns whose coherence has meaningful potential impact on events of significance. In other words, these are little things, spotted in different settings, contexts and times, that when linked together produce a pattern that could have meaningful consequences in different futures. By themselves, such signals are relatively benign; together, they reveal something potentially larger.
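The idea that signals benign in isolation become meaningful in aggregate can be sketched in a few lines. This is a hypothetical toy example, not anything from the post: the setting names, counts, and thresholds are all invented for illustration:

```python
# Hypothetical observations of the same phenomenon in three separate
# settings. Each setting's count is too small to cross a local
# "worth acting on" threshold, but pooled together the pattern shows.
settings = {
    "clinic": [1, 0, 2, 1],
    "school": [0, 1, 1, 2],
    "online": [2, 1, 0, 1],
}

LOCAL_THRESHOLD = 8    # what any single observer would need to take notice
POOLED_THRESHOLD = 10  # what the cross-setting pattern must reach

local_totals = {name: sum(obs) for name, obs in settings.items()}
pooled_total = sum(local_totals.values())

# No single setting crosses its threshold...
print("local totals:", local_totals)   # each setting totals 4, below 8
# ...but the pooled signal does.
print("pooled total:", pooled_total)   # 12, above 10
```

The point is not the arithmetic but the framing: an observer confined to one setting rightly dismisses the signal, and only a process that links observations across settings, which is what mindful organizational sensemaking does, can see the larger pattern.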
One reason weak signals get missed is the premature labelling of information as ‘good’ and a definition of what is ‘useful’ constrained by the current context. Mindfulness practice allows you to transcend the values and judgements imposed on the data or information in front of you and to see it more objectively.
Mindfulness involves quieting the mind and focusing on the present moment, not the past or the possible implications for the future, just the here and now. It is not ahistorical, however. Our past experience, knowledge and wisdom all come to bear on the mindful experience, yet they do not guide that experience.
Experience provides a frame of reference to consider new information, not judge it or apply value to it. It is what allows you to see patterns and derive meaning and sense from what is out there.
Building Mindful Organizations
A review of the research and scholarship on mindfulness finds a nearly exclusive focus on the individual. While there is much literature on the means of using mindfulness and contemplative inquiry as means of being active in the world, this is done largely through mechanisms of individuals coming together as groups, rather than the organizations they form as the focus of analysis.
There is an exception. Social psychologists Weick and Sutcliffe (2007, summarized here and here – PDF) wrote about resiliency in the face of uncertainty using a mindfulness lens to understand how organizations make better sense of what they do and experience in their operations. In their manuscript, Organizing for High Reliability: Processes of Collective Mindfulness (PDF), they lay down a theory for the mindful organization and how it increases the reliability of sensemaking processes when applied to complex informational environments.
They describe the conditions that precipitate mindfulness in organizations this way (p.38):
A state of mindfulness appears to be created by at least five processes that we have induced from accounts of effective practice in HROs (High Reliability Organizations) and from accident investigations:
1. Preoccupation with failure
2. Reluctance to simplify interpretations
3. Sensitivity to operations
4. Commitment to resilience
5. Underspecification of structures
It is notable that the aim here is not to reduce complexity (or impose simplicity), nor is it to focus on ‘positivity’; rather, it is focused on events that help contribute to moving in a particular direction. In that regard, this is not neutral, but it is not active either. It enables organizations to see patterns, to focus on structures and information that encourage resilience to change, and to contemplate what that information means (sensemaking) in context. Doing so provides useful information for decision making and taking action, but doesn’t frame information in those terms a priori.
Seeing Beyond Events
At issue is the development of consciousness of what is going on within your organization moment-to-moment, rather than punctuated by events. Events are the emergent properties of underlying patterns of activity. When we spend time attending to events without understanding the conditions that led to them, we are doing the equivalent of changing the dressing on a wound without understanding or preventing its cause.
A mindful organization, like the image of the Buddha above, can emphasize the eye, but not at the expense of the rest of the picture. It is attuned to both simultaneously, noting events (e.g., the highlighted square of the eye above) while recognizing that the highlighted square only makes sense against the underlying pattern beneath it (the rest of the pictured squares). Yet the only way the organization can learn that the yellow square is different, or ascertain its significance, is through a sense of the whole, not just the part, and that sense is social.
The Curious Organization
Mindfulness and its wider-focused counterpart, contemplative inquiry, both have a root in attending to the present moment, but also in curiosity about the things that are brought to the mind’s attention. It’s not just about seeing, but inquiring. What makes it distinct is that it does not impose judgement on what is perceived, nor seek to change it, while in that state of mindful awareness. This judgement and imposition of value onto what is going on is where organizations can get trapped.
In complex systems, the meaning of information may change rapidly and is likely uncertain. The wisdom of experience, shared among others contemplating the same information without judgement, allows a sensemaking process to unfold that does not impose limitations, yet also keeps a focus on what is going on moment-to-moment. Gathering this data, moment-to-moment, is what developmental evaluation, with its emphasis on real-time data collection, seeks to do; it can serve as a valuable tool for organizing data so that a mindful, contemplative inquiry into it can illuminate weak signals.
An organizational culture of open sharing, questioning, experimentation, and attention to the adjacent possibles that emerge from operational data and experience is the foundation of a mindful organization. This means slowing down, valuing non-doing instead of the constant push to action, and cultivating contemplative inquiry and reflection, while also being clear about the directions that matter. Thus, strategy in this case is not divorced from mindfulness; rather, it gently frames a directionality of effort. In doing so, it creates possibilities for innovation, attention to quality, and a mechanism for building resiliency within organizations and among those working with and within them.
In creating these mindful systems we move closer to making sense of complexity and better prepare ourselves for social innovation.
The human body is oriented towards forward motion and so too are our institutions, yet while this helps us move linearly and efficiently from place to place, it may obscure opportunities and challenges that come from other directions such as those posed by complexity. Thinking about and re-orienting our perceptions of who we are and where we are going might be the key to understanding and dealing with complexity now and in the future.
When heading out into the turbulent waters that face us, we humans tend to look straight ahead and press forward. Our entire physical being is aimed at facing forward. We look forward and walk forward, and this often means thinking forward.
Doing this predisposes us to seeing problems ahead of us or behind us, but is less useful when what challenges us is positioned elsewhere. For this reason, fish and birds, with their eyes on the sides of their heads, are able to adapt quickly to challenges from nearly any direction. It also allows them to fly or swim in flocks, swarms and schools, and to operate with high degrees of coordination on a large scale.
These are skills that are useful for handling the social problems that are complex in nature and require mass action to address. But, we don’t have eyes on the side of our head and we tend to look forward or backward to orient ourselves and our activities.
One way this expresses itself is in our perceptions of time. Thor Muller, writing in Psychology Today online, highlighted how our perceptions of time influence the way we handle appointments and punctuality with modern technology. Citing the work of anthropologist Edward T. Hall (although mistakenly referring to Manhattan Project contributor Edward Teller), Muller points to the differences in perceived time across cultures and the way these play out in our treatment of time, the technology used to “manage” it, and the complexity of everyday life. Monochronistic and polychronistic time orientations determine whether you see time as a linear, quantifiable phenomenon or a more non-linear, contextual one. One allows you to “bank” time, while the other deals more with the present moment and is less dependent on forward-backward thinking.
Western society and the technologies developed within it are oriented primarily towards dealing with a monochronistic form of time. This works well when patterns, problems and situations have a linear, ordered set of circumstances to them. The cause-and-effect world of normal science fits within this worldview.
Complexity is non-linear and not easily defined in cause-and-effect terms and conditions. Two-dimensional space doesn’t capture complexity the way it can for linear situations. It also means thinking solely in forward and back terms is problematic.
An example of where this comes into conflict is in program planning and evaluation. Traditional evaluation methods and metrics are set up to look at programs that are planned to start and end, with impacts developed and detected in between. This implies a certain level of consistency in the conditions in which the program operates. This control-and-measure aspect of evaluation is one of the hallmark features of scientific inquiry.
For programs operating in environments of great change and flux, this is a faulty proposition. We cannot hold the environment constant, for starters. Secondly, feedback gained from learning about the program as it proceeds is critical to ensuring adaptation and promoting resilience in the face of changing conditions. In these cases, failure to act and adapt on the go may result in a program failing catastrophically.
This is where developmental evaluation comes in. Developmental evaluation works with these conditions to generate data in a manner that programs can make sense of and use to facilitate strategic adaptation, rather than simply reacting to changes. As the name suggests, it promotes development rather than improvement. Developmental design is the incorporation of this feedback into an ongoing program development and design process.
Both developmental design and evaluation require ways of seeing the world beyond forward/backward. This seeing comes from understanding one’s position in the first place, and that requires methods of centring that take us into the world of polychronistic time. One example of a strategy that suits this approach is mindfulness programming. Mindfulness-based programs have shown remarkable efficacy in healing and health interventions aimed at stress reduction across conditions. Mindfulness techniques, ranging from meditation to contemplative inquiry (video), bring focus to the present moment and away from an orientation towards linear trajectories of time, thought and attention.
Some forms of martial arts promote attentive awareness to the present moment by training practitioners in strategies that are focused on simple rules of engagement, rather than just learning techniques for defence.
These approaches combine inward reflection — reflective practice — with an openness to the data that comes in around them without imposing an order on it a priori. The orientation is to the data and the lessons that come from it rather than its directionality or imposing values on what the data might mean at the start. It means slowing down, contemplating things, and acting on reflection not reacting based on protocol. This is a fundamental shift for many of our activities, but may be the most necessary thing we can focus on if we are to have any hope of understanding, dealing with, and adapting to complexity.
All the methods and tools at our disposal will not help if we cannot change our mindset and orientation, even temporarily, to this reality when looking at complexity in our work. One of complexity’s biggest challenges right now is that it is seductive in accounting for the massive, dynamic sets of conditions we face every day, yet it lacks methods beyond evaluation for doing things with it. The irony of mindfulness and contemplative approaches is that they are less about acting differently and more about seeing things in new ways, yet it is that orientation that is the key to moving from talking about change to making real change. It is the design doing that comes with design thinking and the systems change that comes from systems thinking.
The days of creating programs, products and services and setting them loose on the world are coming to a close, posing challenges to the models we use for design and evaluation. Adding the term ‘developmental’ to both of these concepts, with an accompanying shift in mindset, can provide options for moving forward in these times of great complexity.
We’re at the tail end of a revolution in product and service design that has generated some remarkable benefits for society (and its share of problems), creating the very objects that often define our work (e.g., computers). However, we are in an age of interconnectedness and ever-expanding complexity: our disciplinary structures are modifying themselves, and “wicked problems” are less rare.
At the root of the problem is the concept of developmental thought. A critical mistake in comparative analysis, whether through data or rhetoric, is viewing static things and moving things through the same lens. Take, for example, a tree and a table. Both are made of wood (maybe even the same type of wood), yet their developmental trajectories are enormously different.
Tables are relatively static. They may get scratched, painted, re-finished, or modified slightly, but their inherent form, structure and content are likely to remain constant over time. The tree is also made of wood, but it will grow larger; it may lose branches and gain others; it will interact with the environment, providing homes for animals and hiding spaces or swings for small children; it will bear fruit (or pollen), change leaves, and grow around things, yet it will also maintain enough structural integrity that a person could come back after 10 years and recognize that the tree looks similar.
It changes and it interacts with its environment. If it is a banyan tree or an oak, this interaction might take place very slowly, however if it is bamboo that same interaction might take place over a shorter time frame.
If you were to take the antique table shown above, take its measurements and record its qualities, and come back 20 years later, you would likely see an object that looks remarkably similar to the one you left. The time of the initial observation would be minimally relevant to when the second observation was made. The manner in which the table was used will have some effect on these observations, but only as a matter of degree; the fundamental look and structure is likely to remain consistent.
However, if we were to do the same with the tree, things could look wildly different. If the tree was a sapling, coming back 20 years later might reveal an object two, three or four times larger. If the tree was 120 years old, the differences might be minimal. Its species, growing conditions and context matter a great deal.
Design for Development / Developmental Design
In social systems, and particularly ones operating with great complexity, models of creating programs, policies and products that are simply released into the world like a table are becoming anachronistic. Tables work for simple tasks and sometimes complicated ones, but not (at least, not consistently) for complex ones. It is in those areas that we need to consider the tree as a more appropriate model. However, in human systems these “trees” are designed: we create the social world, the policies, the programs and the products, thus design thinking is relevant and appropriate for those seeking to influence our world.
Yet, we need to go even further. Designing tables means creating a product and setting it loose. Designing for trees means constantly adapting and changing along the way. It is what I call developmental design. Tim Brown, the CEO of IDEO and one of the leading proponents of design thinking, has started to consider the role of design and complexity as well. Writing in the current issue of Rotman Magazine, Brown argues that designers should consider adapting their practice towards complexity. He poses six challenges:
- We should give up on the idea of designing objects and think instead about designing behaviours;
- We need to think more about how information flows;
- We must recognize that faster evolution is based on faster iteration;
- We must embrace selective emergence;
- We need to focus on fitness;
- We must accept the fact that design is never done.
Bringing Design and Evaluation Together
Image (Table) Table à ouvrage art nouveau (Musée des Beaux-Arts de Lyon) by dalbera
All used under licence.
Concepts like evaluation, design and even complexity itself offer insights, strategies and applications that provide usable solutions to real-world problems, but they also suffer from widespread misunderstanding, confusion and even derision. If they are to take hold beyond their initial communities of interest, they need to address their PR problem head on.
This past week was Design Week in Toronto. As someone who works extensively with design concepts and even has a health promotion-focused design studio, I couldn’t be faulted for thinking that this would be a big week for someone like me who lives and works in the city. Well, it came and went and I didn’t attend a single thing. The reason was partly due to timing and my schedule, but largely because the focus of the week was not really on design writ large, but rather on interior design. Sure, there were a few events that focused on social issues (what I am interested in), like the Design With Dialogue session on Designing a Future for our Future, but mostly the week focused on one area of a large field.
And thus, interior design was left to represent all of design.
So why does this matter? It matters a lot because when people hear the term design, most of what was presented this week fits with that perception. The problem is that design is so much more than that. It is about making things, creative thinking and problem tackling (design thinking), social innovation, and responsive planning for complex situations. Architects, business leaders, military strategists, social service agencies and health promoters all engage in design. Indeed, Herbert Simon‘s oft-quoted and often contested definition fits nicely here:
Everyone designs who devises courses of action aimed at changing existing situations into preferred ones
If one accepts that we are all designers and that all we create and use for change is design, then a week devoted to the topic should offer much more than innovative concepts in furniture or flooring. Yet this high-concept style showcase is what most people think of when they first hear design. Give people a choice between a Philippe Starck Juicy Salif citrus juicer and a trades-based, social change curriculum for low-income kids, such as the work of Emily Pilloton, as the example of design, and they will probably think Starck over Pilloton, when both are equally valid examples.
Evaluation (another area I focus my work on) is equally fraught with perception problems. If you want to raise someone’s blood pressure or heart rate, tell them that either they or their work will be the focus of an evaluation. Evaluation may be the longest four-letter word in the English language. Yet, tell someone that you have a strategy that can enable people to learn about what they do, its impact, and provide intelligence on ways to improve, adapt and outperform their competitors and you’ll find an inspired audience for evaluation services.
Lastly, complexity presents the same challenge. Its very name can make people shy away from it. As humans, we crave the simple in most things, as it is easier to understand, manage and control. Complexity offers none of these things and, if anything, reveals how little control we have. Entire fields of inquiry have been established around complexity science and its related theories and practices. Complexity can help us make sense of why things don’t work as we think they ought to, and allow us to navigate unpredictable terrain with greater resilience than if we tried to tackle such problems as if they were linear in their cause and consequence.
In all of these cases (design, evaluation and complexity) there exists a PR problem. The advantages they pose are tremendous, yet these concepts are frequently misunderstood, dismissed, or inappropriately used. When this happens, it creates even greater distance between the potential benefits these concepts offer and their real-world application.
This distance is partly an artefact of poorly articulated definitions and examples, but it is also partly by design (no pun intended). There are those who relish having these concepts appear opaque to those outside their social cluster. Thus, we have the ‘superstar designer’ who seeks to create products and personas built upon their rarity, rather than their accessibility. There are evaluators who exploit people’s fear of evaluation, and their lack of understanding of its methods and practices (vs concepts like research or innovation consulting), to gain contracts and social influence within their field. Complexity, with its foundations in physics and systems biology, can appear to the layperson as otherworldly, making its practitioners and scientists seem all the more powerful and smart. These tactics benefit a small ‘elite’ of professionals, while robbing a far larger audience of the potential benefits.
In 1969, then president of the American Psychological Association, George Miller, implored members to “give psychology away”. His message was that psychology was too important to be left only to professional, graduate-trained practitioners. If psychology was to confer social benefits, it was necessary to ensure that everyone had access to it: its theories, methods, models and treatments. It is perhaps no surprise that psychology remains one of the most popular undergraduate degree programs in the arts and social sciences and the focus of television shows, magazines and an array of services. Miller was commenting on the need to change a field that he perceived was becoming elitist and not serving the needs of society.
The same might be true of design, evaluation and complexity if we let it. It’s not a surprise that these three concepts are intimately tied together, as those training to apply design thinking and strategic foresight learn. Perhaps it’s time to start giving these ideas away, but to do so we first need to rehab their image and apply some design thinking and brand development strategy to all three. As practitioners in any or all of these fields, giving away what we do, by educating, reinforcing, and ensuring that the work we do is of the highest quality, is a way to lead by example. None of us is likely to change things alone, but together we can do wonders.
For those interested in evaluation, I suggest catching up on the AEA365 blog sponsored by the American Evaluation Association, where evaluation bloggers and practitioners share ideas about how to practice evaluation, but also how to communicate it to others. For those interested in design, I would encourage you to look at places like the Design Thinkers LinkedIn group, where practitioners from around the world discuss innovations and ways to promote and apply design thinking. A similar group, and opportunity, exists with the Systems Thinking LinkedIn group, or by joining the Plexus Institute, which does considerable work to promote complexity and systems thinking in North America.
Planning works well for linear systems, but often runs into difficulty when we encounter complexity. How do we make use of plans without putting too much faith in their anticipated outcomes while still designing for change? And can developmental design and developmental evaluation be a solution?
It’s that time of year when most people are starting to feel the first pushback against their New Year’s resolutions. That strict budget, the workout plan, the make-time-for-old-friends commitments are most likely encountering their first test. Part of the reason is that most of us plan for linear activities, yet in reality most of these activities are complex and non-linear.
A couple of interesting quotes about planning for complex environments:
No battle plan survives contact with the enemy – Colin Powell
In preparing for battle I have always found that plans are useless, but planning is indispensable – Dwight D. Eisenhower
Combat might be the quintessential complex system, and both Generals Powell and Eisenhower knew how to plan for it and what limits planning had, yet that didn’t dissuade them from planning, acting and reacting. In war, the end result is what matters, not whether the plan for battle went as outlined (although the costs and actions taken are not without scrutiny or concern). In human services, there is a disproportionate amount of concern about ‘getting it right’ and holding ourselves to account for how we got to our destination, relative to what happens at the destination itself.
Planning presents myriad challenges for those dealing with complex environments. Most of us, when we plan, expect things to go according to what we’ve set up. We develop programs to fit with this plan, set up evaluation models to assess the impact of this plan, and envisage entire strategies to support the delivery and full realization of this plan into action. For those working in social innovation, what is often realized falls short of what was outlined, which inevitably causes problems with funders and sponsors who expect a certain outcome.
Part of the problem is the mindset that shapes the planning process in the first place. Planning is designed largely around the cognitive rational approach to decision making (PDF), which is based on reductionist science and philosophy. Like the image above, a plan is often seen as a blueprint for laying out how a program or service is to unfold over time. Such models of outlining a strategy are quite suitable for building a physical structure like an office, where everything from the materials to the machines used to put them together can be counted, measured and bound. They are much less relevant for services that involve interactions between autonomous agents whose actions influence the outcome of that service, an outcome that might vary from context to context as a consequence.
For evaluators, this is problematic because it reduces control (and increases variance and ‘noise’) in models that are designed to reveal specific outcomes using particular tools. For program implementers, it is troublesome because rigid planning can drive actions away from where people are and force them into activities that might not be contextually appropriate due to some change in the system.
For this reason, the twin concepts of developmental evaluation and developmental design require some attention. Developmental evaluation is a complexity-oriented approach to feedback generation and strategic learning intended for programs with a high degree of novelty and innovation. Programs where the evidence is low or non-existent, the context is shifting, and there are numerous strong and diverse influences are those where developmental evaluations are not only appropriate, but perhaps one of the only viable models of data collection and monitoring available.
Developmental design is a concept I’ve been working on as a reference to the need to incorporate ongoing design and re-design into programs even after they have been initially launched. Thus, a program evolves over time drawing in information from feedback gained through processes like evaluation to tweak its components to meet changing circumstances and needs. Rather than have a static program, a developmental design is one that systematically incorporates design thinking into the evolutionary fabric of the activities and decision making involved.
Both developmental design and evaluation work together to provide the data program planners need to constantly adapt their offerings to changing conditions, thus avoiding the problem of outcomes becoming decoupled from program activities, and working with complexity rather than against it. For example, developmental evaluation can determine which key attractors are shaping program activities, while developmental design can work with those attractors to amplify or dampen them depending on the level of beneficial coherence they offer a program. In these two joined processes we can acknowledge complexity while creating more realistic and responsive plans.
Such approaches to design and evaluation are not without contention among traditional practitioners, leaving questions about the integrity of the finished product (for design) and the robustness of the evaluation methods, but without alternative models that take complexity into account, we are simply left with bad planning instead of making it what Eisenhower wanted it to be: indispensable.
Are you an evaluator and do you blog? If so, the American Evaluation Association wants to hear from you. This CENSEMaking post features an appeal to those who evaluate, blog and want to share their tips and tricks for helping create a better, stronger KT system.
Build a better mousetrap and the world will beat a path to your door — Attributed to Ralph Waldo Emerson
Knowledge translation in 2011 is a lot different than it was before we had social media, the Internet and direct-to-consumer publishing tools. We now have the opportunity to communicate directly with an audience and share our insights in ways that go beyond technical reports and peer-reviewed publications and come closer to sharing our tacit knowledge. Blogs have become a powerful medium for doing this.
I’ve been blogging for a couple of years and quite enjoy it. As an evaluator, designer, researcher and health promoter, I find it allows me to take different ideas and explore them in ways that more established media do not. I don’t need to have the idea perfect, or fully formed, or relevant to a narrow audience. I don’t need to worry about what my peers or my editor think, because I serve as peer reviewer, editor and publisher all at the same time.
I originally started blogging to share ideas with students and colleagues — just small things about the strange blend of topics I engage in that many don’t know about, don’t understand, or want to know more about. Concepts like complexity, design thinking, developmental evaluation, and health promotion can get kind of fuzzy or opaque for those outside of those various fields.
Blogs enable us to reach an audience directly and provide a means of adaptive feedback on novel ideas. Using the comments, visit statistics, and direct messages sent to me from readers, I can gain some sense of which ideas are being taken up by people and which ones resonate. That enables me to tailor my messages and amplify those parts that are of greater utility to a reader, thus increasing the likelihood that a message will be taken up. For CENSEMaking, the purpose is more self-motivated writing than trying to assess the “best” messages for the audience; however, I have a series of other blogs that I use for projects as a KT tool. These are, in many cases, secured and by invitation only to the project team and stakeholders, but still look and feel like any normal blog.
As a KT tool, blogs are becoming more widely used. Sites like Research Blogging are large aggregations of blogs on research topics. Others, like this one, are designed for certain audiences and topics, even KT itself, like the KTExchange from the Research Into Action initiative at the University of Texas and MobilizeThis! from the Research Impact Knowledge Mobilization group at York University.
The American Evaluation Association has an interesting blog initiative led by AEA’s Executive Director Susan Kistler called AEA365, which is a tip-a-day blog for evaluators looking to learn more about who and what is happening in their field. A couple of years ago I contributed a post on using information technology and evaluation and was delighted at the response it received. So it reaches people. It’s for this reason that AEA is calling out to evaluation bloggers to contribute to the AEA365 blog with recommendations and examples for how blogging can be used for communications and KT. AEA365 aims to create small-bite pockets of information that are easily digestible by its audience.
If you are interested in contributing, the template for the blog is below, with my upcoming contribution to the AEA365 blog posted below that.
By embracing social media and the power to share ideas directly (and doing so responsibly), we have a chance to come closer to realizing the KT dream: putting more effective, useful knowledge into the hands of those who can use it, faster, and engaging those who are most interested and able to use that information more efficiently and humanely.
Interested in submitting a post to the AEA365 blog? Contact the AEA365 curators at email@example.com.
Template for aea365 Blogger Posts (see below for an example)
[Introduce yourself by name, where you work, and the name of your blog]
Rad Resource – [your blog name here]: [describe your blog, explain its focus including the extent to which it is related to evaluation, and tell about how often new content is posted]
Hot Tips – favorite posts: [identify 3-5 posts that you believe highlight your blogging, giving a direct link and a bit of detail for each (see example)]
- [post 1]
- [post 2]
Lessons Learned – why I blog: [explain why you blog – what you find useful about it and the purpose for your blog and blogging. In particular, are you trying to inform stakeholders or clients? Get new clients? Provide a public service? Help students?]
Lessons Learned: [share at least one thing you have learned about blogging since you started]
Remember – stay under 450 words total please!
My potential contribution (with a title I just made up): Cameron Norman on Making Sense of Complexity, Design, Systems and Evaluation: CENSEMaking
Rad Resource – [CENSEMaking]: CENSEMaking is a play on the name of my research and design studio consultancy and on the concept of sensemaking, something evaluators help with all the time. CENSEMaking focuses on the interplay of systems and design thinking, health promotion and evaluation, and weaves together ideas I find in current social issues, reflections on my practice, and the evidence used to inform it. I aspire to post on CENSEMaking 2-3 times per week, although because it is done in a short-essay format, finding the time can be a challenge.
Hot Tips – favorite posts:
- What is Developmental Evaluation? This post came from a working group meeting with Michael Quinn Patton and was fun to write because the original exercise that led to the content (described in the post) was so fun to do. It also provided an answer to a question I get asked all the time.
- Visualizing Evaluation and Feedback. I believe that the better we can visualize complexity and the more feedback we provide, the greater the opportunities we have for engaging others, and the more evaluations will be utilized. This post was designed to provoke thinking about visualization and to illustrate how it’s been creatively used to present complex data in interesting and accessible ways. My colleague and CENSE partner Andrea Yip has tried to do this with a visually oriented blog on health promoting design, which provides some other creative examples of ways to make ideas more appealing and data feel simpler.
- Developmental Design and Human Services. Creating this post has sparked an entire line of inquiry for me on bridging DE and design that has since become a major focus for my work. This post became the first step in a larger journey.
Lessons Learned – why I blog: CENSEMaking originally served as an informal means of sharing my practice reflections with students and colleagues, but has since grown to serve as a tool for knowledge translation to a broader professional and lay audience. I aim to bridge the sometimes foggy world that things like evaluation inhabit – particularly developmental evaluation – and the lived world of people whom evaluation serves.
Lessons Learned: Blogging is a fun way to explore your own thinking about evaluation and make friends along the way. I never expected to meet so many interesting people because they reached out after reading a blog post of mine or made a link to something I wrote. This has also led me to learn about so many other great bloggers, too. Give a little, get a lot in return and don’t try and make it perfect. Make it fun and authentic and that will do.