Tags: complexity, education & learning, emergence, evaluation, systems thinking

Developmental Evaluation and Mindfulness

Mindfulness in Motion?

Developmental evaluation is focused on real-time decision making for programs operating in complex, changing conditions, which can tax the attentional capacity of program staff and evaluators. Organizational mindfulness is a means of paying attention to what matters and building the capacity across the organization to better filter signals from noise.

Mindfulness is a means of introducing quiet into noisy environments, the kind that are often the focus of developmental evaluations. Like the image above, mindfulness involves remaining calm and centered while everything around is growing, crumbling, and (perhaps) coming apart.

Mindfulness in Organizations and Evaluation

Mindfulness is the disciplined practice of paying attention. Bishop and colleagues (2004), working in the clinical context, developed a two-component definition of mindfulness: 1) self-regulation of attention so that it is maintained on immediate experience, enabling pattern recognition (enhanced metacognition), and 2) an orientation to experience that is committed to, and maintains, an attitude of curiosity and openness toward the present moment.

Mindfulness does not exist independent of the past; rather, it takes account of present actions in light of the path to the current context. As simple as it may sound, mindfulness is anything but easy, especially in complex settings with many sources of information. What this means for developmental evaluation is that there needs to be a method of capturing data relevant to the present moment, a sensemaking capacity to understand how that data fits within the overall context and system of the program, and a strategy for provoking curiosity about the data to shape innovation. Without attention, sensemaking, or interest in exploring the data, there is little likelihood of much change, and change is what design (the next step in DE) is all about.

Organizational mindfulness is a quality of social innovation that situates the organization’s activities within a larger strategic frame that developmental evaluation supports. A mindful organization is grounded in a set of beliefs that guide its actions as lived through practice. Without some guiding, grounded models for action, an organization can go anywhere, and the data collected from a developmental evaluation has little context: nearly anything could develop from that data. Yet organizations don’t want just anything; they want the solutions best optimized for the current context.

Mindfulness for Innovation in Systems

Karl Weick has observed that high-reliability organizations are the way they are because of a mindful orientation. Weick and Kathleen Sutcliffe explored the concept of organizational mindfulness in greater detail and made the connection to systems thinking by emphasizing how a mindful orientation opens up the perceptual capabilities of an organization to see its systems differently. They describe a mindful orientation as one that redirects attention from the expected to the unexpected, away from what is comfortable, consistent, desired, and agreed upon, and toward the areas that challenge all of that.

Weick and Sutcliffe suggest that organizational mindfulness has five core dimensions:

  1. Preoccupation with failure
  2. Reluctance to simplify
  3. Sensitivity to operations
  4. Commitment to resilience
  5. Deference to expertise

Ray, Baker and Plowman (2011) looked at how these qualities were represented in U.S. business schools and found some evidence for their existence. However, this mindful orientation is still novel, and its overlap with innovation output remains unverified. (This is also true of developmental evaluation itself: few published studies illustrate how its fundamentals are applied.) Vogus and Sutcliffe (2012) took this further and encouraged more research and development in this area, partly because of the lack of detailed study of how organizational mindfulness works in practice and partly because of an absence of organizational commitment to discovery and change over existing modes of thinking.

Among the principal reasons for this lack of evidence is that organizational mindfulness requires a substantive re-orientation towards developmental processes that include both evaluation and design. For all of the talk about learning organizations in industry, health, education and social services, we see relatively few concrete examples of them in action. A mistake that many evaluators and program planners make is assuming that the foundations for learning, attention and strategy are all in place before launching a developmental evaluation, which is very often not the case. Just as we do evaluability assessments to see if a program is ready for an evaluation, we may wish to consider organizational mindfulness assessments to explore how ready an organization is to engage in a true developmental evaluation.

Cultivating curiosity

What Weick and Sutcliffe’s five-factor model of organizational mindfulness misses is the second part of the definition of mindfulness introduced at the beginning of this post: the part about curiosity. And while Weick and Sutcliffe speak about challenging assumptions in organizational mindfulness, that challenge isn’t well reflected in the model.

Curiosity is a fundamental quality of mindfulness that is often overlooked (not just in organizational contexts). Arthur Zajonc, a physicist, educator and President of the Mind and Life Institute, writes and speaks about contemplative inquiry as a process of employing mindfulness for discovery about the world around us. Zajonc is a scientist, motivated partly by a love of, and curiosity about, both the inner and outer worlds we inhabit. His mindset, reflective of contemplative inquiry itself, is one of attention that is simultaneously open and focused.

Openness to new information and experience is one part; focus, which comes from experience and the need to draw in information to clarify intention and action, is the second. These are the same patterns of movement that we see in complex systems (see the stitch image below), and they are captured in the sensing-divergent-convergent model of design evident in the CENSE Research + Design innovation arrow model below that.

Stitch of Complexity

CENSE Corkscrew Innovation Discovery Arrow

By being better attuned to the systems (big and small) around us and curiously asking questions about them, we may find that the assumptions we hold are untrue or incomplete. By contemplating fully the moment-by-moment experience of our systems, patterns emerge that are often too weak to notice at first but that may drive behaviour in a complex system. The emergence of such weak signals is often what shifts systems.

Sensemaking, which we discussed in a previous post in this series, is a means of taking this information and using it to understand the system and the implications of these signals.

For organizations and evaluators the next step is determining whether or not they are willing (and capable) of doing something with the findings from this discovery and learning from a developmental evaluation, which will be covered in the next post in this series that looks at design.

References and Further Reading: 

Bishop, S. R., Lau, M., Shapiro, S., & Carlson, L. (2004). Mindfulness: A proposed operational definition. Clinical Psychology: Science and Practice, 11(3), 230–241.

Ray, J. L., Baker, L. T., & Plowman, D. A. (2011). Organizational mindfulness in business schools. Academy of Management Learning & Education, 10(2), 188–203.

Vogus, T. J., & Sutcliffe, K. M. (2012). Organizational mindfulness and mindful organizing: A reconciliation and path forward. Academy of Management Learning & Education, 11(4), 722–735.

Weick, K. E., Sutcliffe, K. M., & Obstfeld, D. (1999). Organizing for high reliability: Processes of collective mindfulness. In R. I. Sutton & B. M. Staw (Eds.), Research in Organizational Behavior (Vol. 21, pp. 81–123). Stanford, CA: JAI Press.

Weick, K.E. & Sutcliffe, K.M. (2007). Managing the unexpected. San Francisco, CA: Jossey-Bass.

Zajonc, A. (2009). Meditation as contemplative inquiry: When knowing becomes love. Barrington, MA: Lindisfarne Books.

Tags: complexity, education & learning, evaluation, systems science

Evaluation, Evidence and Moving Beyond the Tyranny of ‘I Think’

I think, you think

The concrete evidence for ‘I think’

Good evidence provides a foundation for decision-making in programs that is dispassionate, comparable, and open to debate and review, yet it is often ignored in favour of opinion. Practice-based evidence lets expert opinion in; however, the way to get it into the discussion is through the very means we use to generate traditional evidence.

Picture a meeting of educators or health practitioners or social workers discussing a case or an issue about a program. Envision those people presenting evidence for a particular decision based on what they know and can find. If they are operating in a complex, dynamic field the chances are that the evidence will be incomplete, but there might be some. Once these studies and cases are presented, invariably the switch comes when someone says “I think...” and offers their opinion.

Incomplete evidence is not useless, but complex systems require a very different use of evidence. As Ray Pawson illustrates in his most recent book, science in the realm of complexity requires a kind of sensemaking and deliberation that differs greatly from extrapolating findings from simple or even complicated program data.

Practice-based evidence: The education case

Larry Green, a thought leader in health promotion, has been advocating to the health services community that if we want more evidence-based practice we need more practice-based evidence (video). Green argues that systems science has much to offer by drawing connections between the program as a system and the systems that the program operates in. He further argues that we are not truly creating evidence-based programs without adding practice-based knowledge to the equation.

Yet, practice-based evidence can quickly devolve into “I think” statements: opinion rooted in unreflective bias, personal prejudice, convenience, and lack of information. To illustrate, I need only consider curriculum decision-making processes at universities (and likely many primary and secondary schools too). Having been a part of training programs at different institutions — in-house degree programs and multi-centre networks between universities — I can say I’ve rarely seen evidence come into play in decisions about what to teach, how to teach, what is learned and what the purpose of the programs is, yet I always see “I think”.

Part of the reason for this is that there is little useful data for designing programs. In post-secondary education we use remarkably crude metrics to assess student learning and progress. Most often, we use imperfect time-series data like assignments that are aggregated together to form a grade. In undergraduate education, we most likely use multiple-choice exams because they are easier to distribute and grade within resource constraints. For graduate students, we still use exams, but perhaps papers as well. But rarely do we have any process data to make decisions on.

Yet, we recruit students based on quotas and rarely look to where the intellectual and career ‘markets’ are going when setting our programs. Instead, we use opinion: “I think we should be teaching X” or “I think we should teach X this way”, with little evidence for why these decisions are made. It is remarkable how learning — whether formal or informal — in organizations is left without a clear sense of what the point is. Asking why we teach something, why people need to learn it, and what they are expected to do with that knowledge is a necessity, not a luxury. To get a sense of the absurdity, NYC principal Scott Conti’s talk at TEDx Dumbo is worth a watch. He points to the mismatch between what we seek to teach and what we expect from students in their lives.

Going small to go big

There is, as Green points out, a need for a systems approach to understanding the problem of taking these anecdotes and opinions and making them useful. Many of the issues with education have to do with resources and policy directions made at levels well beyond the classroom, which is why a systems approach to evaluation is important. Evaluation applied across the system, using a systems approach that takes into account the structures and complexity in that system, can be an enormous asset. But how does this work for teachers or anyone who operates at the front line of their profession?

Evaluation can provide the raw materials for discussion in a program. By systematically collecting data on the way we form decisions, design our programs, and make changes we create a layer of transparency and an opportunity to better integrate practice-based evidence more fully. The term “system” in evaluation or programming often invokes a sense of despair due to a perception of size. Yet, systems happen at multiple scales and a classroom and the teaching within it are systems.

One of the best ways to cultivate practice-based evidence is to design evaluations that take into account the way people learn and make decisions. It requires initially using a design-oriented approach to paying attention to how people operate in the classroom — both as teachers and learners. From there, we can match those activities to the goals of the classroom — the larger goals, the ones that ask “what’s the point of people being here” and also what the metrics for assessment are within the culture of the school. Next, consider ways to collect data on a smaller scale through things like reflective practice journals, recorded videos of teaching, observational notes, and markers of significance such as moments of insight, heightened emotion, or decisions.

By capturing the small decisions it is possible to generate practice-based evidence that goes beyond “I think”. It also allows others in. Rather than ideas being formed exclusively in one’s own head, we can illustrate where people’s knowledge comes from and permit greater learning from those around that person. Too often, the talents and tools of great leaders and teachers are accessible only in formal settings — like lectures or discussions — and not evident to others in the fire of everyday practice.

What are you doing to support evaluation at a small scale and allowing others to access your practice-based knowledge to create that practice-based evidence?

References:

Green, L. W. (2006). Public health asks of systems science: To advance our evidence-based practice, can you help us get more practice-based evidence? American Journal of Public Health, 96(3), 406–409.

Green, L. W. (2008). Making research relevant: If it is an evidence-based practice, where’s the practice-based evidence? Family Practice, 25(Suppl 1), i20–i24.

Pawson, R. (2013). The science of evaluation: A realist manifesto. London, UK: Sage Publications.

Tags: art & design, complexity, design thinking, evaluation, systems thinking

Design Thinking, Design Making

Designing and thinking

Critics of design thinking suggest that it neglects the craft of products, while advocates suggest that it extends design beyond the traditional constraints of the brief. What separates the two are the implications of making something, and the question: can we be good design thinkers without being good design makers?

A review of the literature and discussions on design thinking finds a great deal of debate on whether it is a fad, a source of innovation salvation, or a term that fails to take the practice of design seriously. While prototyping (particularly rapid prototyping) is emphasized, there is little attention to the manner in which the prototyped object is crafted. There are no standards of practice for design thinking, and the myriad settings in which it could be applied — everything from business to education to the military to healthcare — indicate that there is unlikely to be a single model that fits. But should there be some form of standards?

While design thinking encourages prototyping there is remarkably little in the literature on the elements of design that focus on the made product. Unlike design where there is at least some sense of what makes a product good or not, there are no standards for what ought to emerge from design thinking. Dieter Rams, among the most vocal critics of the term design thinking, has written 10 principles for good design that can be applied to a designed product. These principles include a focus on innovation, sustainability, aesthetics, and usability.

These principles can be debated, but they at least offer something others can comment on or use as a foil for critique. Design thinking lacks the same correlate. Is that a good (or necessary) thing?

Designing for process and outcome

Unlike design itself, design thinking is not tied to a particular product profile; it can be used to create physical products as easily as policies and programs. Design thinking is a process centred largely on complex, ambiguous problems where success has no pre-defined outcome and the journey has no set pathway. It is for this reason that concepts like best practices are inappropriate for design thinking and complex problem solving. Design thinking offers a great deal of conceptual freedom without the pressure to produce a specific outcome that might be prescribed by a design brief.

Yet, design thinking is not design. Certainly many designers draw on design thinking in their work, but there is no requirement to create products using that way of approaching design problems. Likewise, there is little demand for design thinking to produce products that would fit what Dieter Rams suggests are hallmark features of good design. Indeed, we can use design thinking to create many possible futures without a requirement to actually manifest any of them.

Design requires an outcome, one that can be judged by a client (or customer or user or donor) as satisfactory, exemplary or otherwise. While what is considered ‘good design’ might be debated, there is little debate that if a client does not like what is produced, that product is a failure on some level*. Yet, if design thinking produces a product (a design?), what is the source of the excellence or failure? And does it matter if anything is produced at all?

Herein lies a fundamental dilemma of design and design thinking: how do we know when we are doing good or great work?

Can we have good design thinking and poor design making?

The case of the military

Roger Martin, writing in Design Observer, highlighted how design thinking was being applied to the US Army through the adaptation of its Field Operations Manual. This new version was based on principles of complexity science and systems thinking, which encourage adaptive, responsive unit actions rather than relying solely on top-down directives. It was an innovative step and design thinking helped contribute to the development of this new Field Manual.

On discussing the process of developing the new manual (FM 5-0), Martin writes:

In the end, FM5-0 defines design as “a methodology for applying critical and creative thinking to understand, visualize, and describe complex, ill-structured problems and develop approaches to solve them” (Page 3.1), which is a pretty good definition of design. Ancker and Flynn go on to argue that design “underpins the exercise of battle command within the operations process, guiding the iterative and often cyclic application of understanding, visualizing, and describing” and that it should be “practiced continuously throughout the operations process.” (p. 15-16)

The manual’s development involved design thinking, and the process it describes is itself based on applying design thinking to field operations. As unseemly as it may be to some, the US Army’s application of design thinking is notable and something that can be learned from. But what is the outcome?

Does a design thinking soldier become better at killing their enemy? Or does their empathy for the situation (their colleagues, opponents and neutral parties) increase their sensitivity to the multiplicities of combat, treating it as a wicked problem? What is the outcome to which design thinking contributes, and how can it be evaluated in its myriad consequences, intended or otherwise? In the case of the US Army it might not be so clear.

Craft

One of the terms conspicuously absent from the dialogue on design thinking is craft. In a series of interviews with professionals doing design thinking, it was noted that those trained as designers (the makers) often referred to ‘craft’ and ‘materials’ in describing design thinking. Those who were not designers did not**. No assessment can be made about the quality of the design thinking that each participant did (that was out of scope of the study), but it is interesting to note how concepts traditionally associated with making — craft, materials, studios — have little parallel in discussions of design thinking.

Should they?

One reason to consider craft is that it can be assessed with at least some independence. There is an ability to judge the quality of materials and the product integrity associated with a designed object according to some standards that can be applied somewhat consistently — if imperfectly — from reviewer to reviewer. For programs and policies, this could be done by looking at research evidence or through developmental evaluation of those products. Developmental design, an approach I’ve written about before, could be the means in which evaluation data, rapid prototyping, design excellence and evidence could come together to potentially create more robust design thinking products.

We have few correlates for assessing design thinking.

The danger in looking at evaluation and design thinking is falling into the trap of devising and applying rigid metrics, best practices and the like to domains of complexity (where design thinking resides), where they tend to fail catastrophically. Yet there is an equal danger that by not aspiring to envision what great design thinking looks like, we produce results that not only fail (often a necessary and positive step in innovation if there is learning from it) but are true failures in the sense that they don’t produce excellent products. It is indeed possible to create highly collaborative, design thinking-inspired programs, policies and products that are dull, ineffective and uninspiring.

Where we go and how we get there is a problem for design and design thinking. Applying them both to each other might be a way to create the very products we seek.

* It is interesting to note that Finnish designer Alvar Aalto’s 1933 three-legged children’s stool has been considered both a design flop from a technical standpoint (it’s unstable given its three legs) and one of the biggest commercial successes for Artek, its manufacturer.

** The analysis of the findings of the project are still ongoing. Updates and results will be published on the Design Thinking Foundations project site in the coming months, where this post will be re-published.

Tags: art & design, complexity, design thinking, evaluation, innovation

Defining the New Designer

Who is the real designer?

It’s been suggested that anyone who shapes the world intentionally is a designer; however, those who train and practice as professional designers question whether such a definition obscures the skill, craft, and ethics that come from formal disciplinary affiliation. Further complicating things is the notion that design thinking can be taught and that the practice of design can be applied far beyond its traditional bounds. Who is right, and what does it mean to define the new designer?

Everyone designs who devises courses of action aimed at changing existing situations into preferred ones. – Herbert Simon, Scientist and Nobel Laureate

By Herb Simon’s definition, anyone who is intentionally directing their energy towards shaping their world is a designer. Renowned design scholar Don Norman (no relation) has said that we are all designers [1]. Defined this way, the term design becomes more accessible and commonplace, removing it from the sense of elitism it has often been associated with. That sounds attractive to most, but it has raised significant concerns among those who identify as professional designers, and it opens up the question of what defines the new designer as we move into an age of designing systems, not just products.

Designer qualities

Design is what links creativity and innovation. It shapes ideas to become practical and attractive propositions for users or customers. Design may be described as creativity deployed to a specific end – Sir George Cox

Sir George Cox, the former head of the UK Design Council, wrote the above statement in the seminal 2005 Cox Review of Creativity in Business in the UK. He sees design as a strategic deployment of creative energy to accomplish something. This can be done mindfully, skilfully and ethically, with a sense of style and fit, or it can be done haphazardly, unethically, incompetently and foolishly. It would seem that designers must put design thinking into practice, which includes holding multiple contradictory ideas in one’s head at the same time. Contradictions of this sort are a key quality of complexity, and abductive reasoning, a means of thinking through such contradictions, is considered a central feature of design thinking.

Indeed, this ‘thinking’ part of design is considered a seminal feature of what makes a designer what they are. Writing on Cox’s definition of design, Mat Hunter, the Design Council’s Chief Design Officer, argues that a designer embodies a particular way of thinking about the subject matter and weaving this through active practice:

Perhaps the most obvious attribute of design is that it makes ideas tangible, it takes abstract thoughts and inspirations and makes something concrete. In fact, it’s often said that designers don’t just think and then translate those thoughts into tangible form, they actually think through making things.

Hunter might be getting closer to distinguishing what makes a designer so. His perspective seems to assume that people are reflective, abductive, and employ design thinking actively, an assumption that I’ve found to be often incorrect. Even with professionals, you can instill design thinking but you can’t make them apply it (why this is the case is something for another day).

This invites the question: how do we know people are doing this kind of thinking when they design? The answer isn’t trivial if certain thinking is what defines a designer. And if they are applying design thinking through making, does that qualify them as a designer, whether or not they have accreditation or a formal degree in design?

Designers recognize that their training and professional development confers specific advantages and requires skill, discipline, craft and a code of ethics. Discipline is a cultural code of identity. For the public or consumers of designed products it can also provide some sense of quality assurance to have credentialed designers working with them.

Yet, those who practice what Herb Simon speaks of are also designers of something. They are shaping their world, acting with intent, and many are doing it with a level of skill and attention parallel to that of professional designers. So what does it mean to be a designer, and how do we define this in light of the new spaces where design is needed?

Designing a disciplinary identity

Writing in Design Issues, Bremner & Rodgers (2013) [2] argue that design’s disciplinary heritage has always been complicated and that its current situation is being affected by three crisis domains: 1) professionalism, 2) economic, and 3) technological. The first is partly a product of the latter two, as the shaping and manufacture of objects becomes transformed. Materials, production methods, knowledge, social context, and the means of transporting the objects of design (whether physically or digitally) have transformed the market for products, services and ideas in ways that have necessarily shaped the process (and profession) of design itself. They conclude that design is not disciplinary, interdisciplinary, or even transdisciplinary, but largely alterdisciplinary: boundless of time and space.

Legendary German designer Dieter Rams is among the most strident critics of the everyone-is-a-designer label and believes this wide use of the term designer takes away the seriousness of what design is all about. Certainly, if one believes John Thackara’s assertion that 80 per cent of the impact of any product is determined at the design stage the case for serious design is clear. Our ecological wellbeing, social services, healthcare, and industries are all designed and have enormous impact on our collective lives so it makes sense that we approach designing seriously. But is reserving the term design(er) for an elite group the answer?

Some have argued that elitism in design is not a bad idea and that this democratization of design has led to poorly crafted, unusable products. Andrew Heaton, writing about User Experience (UX) design, suggests that this elitist view is less about moral superiority and more about better products and greater skill:

I prefer this definition: elitism is the belief that some individuals who form an elite — a select group with a certain intrinsic quality, specialised training, experience and other distinctive attributes — are those whose views on a matter are to be taken the most seriously or carry the most weight. By that definition, Elitist UX is simply an insightful and skilled designer creating an experience for an elevated class of user.

Designing through and away from discipline

Designers recognize that their training and professional development confer specific advantages and require skill, discipline, craft and a code of ethics, but there is little concrete evidence that they produce better-designed outcomes. Design thinking has enabled journalists like Helen Walters, healthcare providers like those at the Mayo Clinic, and business leaders to take design and apply it to their fields and beyond. Indeed, it was business-journalist-cum-design-professor Bruce Nussbaum who is widely credited with contributing to the cross-disciplinary appeal of design thinking (and its critique) through his work at BusinessWeek.

Design thinking is now something that has traversed whatever discipline it was originally rooted in — which seems to be science, design, architecture, and marketing all at the same time. Perhaps unlocking it from discipline and the practices (and trappings) of such structure is a positive step.

Discipline is a cultural code of identity, and for the public it can be a measure of quality. Quality is a matter of perspective, and in complex situations we may not even know what quality means until products are developmentally evaluated after being used. For example, what should a 3-D printed t-shirt feel like? I don’t know whether it should feel like silk, cotton, polyester, nylon mesh, or something else entirely, because I have never worn one, and if I were to compare such a shirt to my current wardrobe I might be using the wrong metric. We will soon be testing this theory, with 3-D printed footwear already in development.

Evaluating good design

The problem of metrics is the domain of evaluation. What is the appropriate measure of good design? Much has been written on the concept of good design, but part of the issue is that what constitutes good design for a bike or a chair might be quite different for a poverty reduction policy, or for a program to support mothers and their children escaping family violence. The idea of delight (a commonly used goal or marker of good design) as an outcome might be problematic in the latter context. Should mothers be delighted by such a program supporting them in a time of crisis? Delight is a worthy goal, but if those involved feel safe, secure, cared for, and supported in dealing with their challenges, that is still a worthwhile design. Focusing on delight as a criterion for good design in this case is using the wrong metric. And what about the designers who bring about such programs?

Or should such a program be judged on the designer's ability to empathize with users and create adaptive, responsive programs that build on evidence and need simultaneously, without delight being the sole goal? Just as healthy food is not always as delightful for children as ice cream or candy, there is still a responsibility to ensure that design outcomes are appropriate. The new designer needs to know when to delight, when and how to incorporate evidence, and how to bring all of the needs and constraints together to generate appropriate value.

Perhaps that ability is the criterion by which we should judge the new designer, encouraging our training programs, our clients (and their asks and expectations), our funders and our professional associations to consider what good design means in this age of complexity and then figure out who meets that criterion. Rather than building from discipline, consider deriving quality from the outcomes and processes of design itself.

[1] Norman, D. (2004) Emotional design: why we love (or hate) everyday things. New York, NY: Basic Books.

[2] Bremner, C. & Rodgers, P. (2013). Design without discipline. Design Issues, 29 (3), 4-13.

education & learning, evaluation, systems thinking, Uncategorized

Scaling Education: The Absurd Case of the MOOC

Theatre at the Temple of Apollo


The Chronicle of Higher Education (online) recently reported the results of a survey of faculty teaching MOOCs (massive open online courses) and found much interest and expectation around this new format.

The survey, conducted by The Chronicle, attempted to reach every professor who has taught a MOOC. The online questionnaire was sent to 184 professors in late February, and 103 of them responded.

Hype around these new free online courses has grown louder and louder since a few professors at Stanford University drew hundreds of thousands of students to online computer-science courses in 2011. Since then MOOCs, which charge no tuition and are open to anybody with Internet access, have been touted by reformers as a way to transform higher education and expand college access. Many professors teaching MOOCs had a similarly positive outlook: Asked whether they believe MOOCs “are worth the hype,” 79 percent said yes.

The survey of professors was not scientific, particularly because it reached only those already teaching MOOCs, but it paints a picture of enthusiasm among those engaged with the medium, many of whom were initially skeptical about the potential of online education at that scale.

Global Potential vs Global Hype

NY Times columnist Thomas Friedman, never one to resist enthusiasm for global movements, speaks of the MOOC as a revolution. Friedman suggests that a MOOC-driven education system, given its promise to do good things, could change the way foreign aid is delivered.

Anant Agarwal, the former director of M.I.T.’s artificial intelligence lab, is now president of edX, a nonprofit MOOC that M.I.T. and Harvard are jointly building. Agarwal told me that since May, some 155,000 students from around the world have taken edX’s first course: an M.I.T. intro class on circuits. “That is greater than the total number of M.I.T. alumni in its 150-year history,” he said.

Yes, only a small percentage complete all the work, and even they still tend to be from the middle and upper classes of their societies, but I am convinced that within five years these platforms will reach a much broader demographic. Imagine how this might change U.S. foreign aid. For relatively little money, the U.S. could rent space in an Egyptian village, install two dozen computers and high-speed satellite Internet access, hire a local teacher as a facilitator, and invite in any Egyptian who wanted to take online courses with the best professors in the world, subtitled in Arabic.

For Friedman and many of those writing on MOOCs, the potential for this new format to bring the world's best higher education to anyone, anywhere, in any country is enormous. It is an example of taking an innovation to scale at perhaps its most extreme. With the click of a mouse, the entire world can learn together easily.

Except, it is a lie.

Writing as a guest on the Worldwise blog at the Chronicle of Higher Education, professor of writing and rhetoric Ghanashyam Sharma puts truth to the lie in the modern online education movement's hype. In an article called A MOOC Delusion: Why Visions to Educate the World Are Absurd, Sharma illustrates the faulty thinking that underpins much of the enthusiasm for MOOCs transforming global education. His critique is less about MOOCs as an educational vehicle in themselves than about the global, culture-free scale at which they seek to operate.

There is a dire need for some healthy skepticism among educators about the idea that MOOCs are a wonderful means to go global in order to do good. For our desire to educate the whole world from the convenience of our laptops to be translated into any meaningful effect, we need more research about how students learn in massive open online platforms, and a better understanding of how students from different academic, cultural, social, and national backgrounds fare in such spaces.

Education at Scale

Drawing on his own experience teaching in different contexts, beginning in Nepal and then moving to the University of Louisville and now SUNY Stony Brook, Sharma points to the subtleties in cultural learning styles that did not translate from one setting to another. He speaks of enormous challenges and, fortunately, of having the resources to meet them and adapt his teaching from context to context. In doing so, he points to the myth that the global classroom will be filled with people all learning the same way, from a compatible perspective, and using the same language. Even simple differences in the way he presents himself as a teacher can change the manner in which students learn, and those differences are rooted in culture.

As a teacher myself, I recognize much of what he writes. Even a 'simple' face-to-face graduate course presents considerable pedagogical challenges for me. As an instructor in public health I need to consider things like:

  • Disciplinary background. Each discipline has communication 'sub-cultures' and traditions that differ. A student with a sociology background might be accustomed to rhetorical argument, while one from the basic sciences may be used to communicating through technical reports. Each discipline also uses language in different ways, with terms unique to that discipline.
  • Cultural backgrounds. Students from around the world differ in the way they approach the material, present arguments, and participate in class. Even within the narrow band of cultural contexts present in my classroom (usually 10-20 per cent of students are international), I am always amazed at the nuances that play out. Race, gender and language all add additional complex layers that would require volumes to unpack here.
  • Personality and motivation. Students who are more reserved in class will experience each lesson differently than those who are outgoing. Personality and motivation change the type of discussions learners engage in and shape their interactions with other students. This is not to say that one personality style is better than another, but whether students are introverted or extraverted, confident, articulate or highly social shapes the classroom. Face-to-face encounters allow for some modulation of these effects to encourage participation among those with different needs.
  • Literacy. Whether it is prose, numeracy, or discipline-specific language use, the literacy of students is tested when they have to take in material (through lecture, small groups, video, audio or text) and convey arguments.

These are constraints (and opportunities) in most modern universities, whether a course is delivered face-to-face or online. The effort required to engage a room full of eager learners while attending to these various issues is enormous. As an educator, I believe that effort makes for a good experience for everyone and is a joyous part of the job. But I am speaking to a room of about 25 graduate students at one of the best universities in the world, all of whom are present and share the same environmental context, even if some are visiting students.

What happens when we are teaching to a ‘room’ of 150,000 people from all over the world?

Understanding scale

I do believe that we can learn much from online courses and that much can be conveyed via the MOOC that brings content to the world in ways that are appropriate and useful for a general audience. I am not against the MOOC, but I do think we need to carefully consider what it means to take education to scale and ask some deeper questions about what the learning experience is intended to achieve.

We need to design our educational experience using the same principles or design thinking we would apply to any other service of value.

The lie is in believing that true education of a global audience, in the rich way that a university or college intends, is not only possible but appropriate. What is being lost in the effort to create a common experience for a global classroom? Are we just sending out information or are we creating a learning environment? To what degree does the MOOC serve this purpose well? That is where a designer might start before moving to ask how we might bring learning from one context to another.

Ghanashyam Sharma points to the folly of thinking the jump from culture to culture can be easily made when teaching the world all at once. The MOOC offers enormous potential to re-shape the way people learn, promoting access to content and expertise that many in this world could only have dreamed of years ago. But the naivete of believing such learning can be done in a monocultural way, without losing something special about the context in which people learn and use what they learn, might lead to some expensive lessons.

Sharma is right to ask for more research. Thomas Friedman is right to inspire us to think of creative ways to bring the wealth of the west’s educational treasures to the globe. The key is to figure out what parts of this global vision are real or possible, which are illusions, and which are delusions.

Unlike the auditorium at the Temple of Apollo at Delphi (pictured above), the audience for what we teach now is not from one place, time and shared cultural perspective; it represents the world. Without an understanding of what scale means in education, we might be producing more ignorance than knowledge.

Photo: Cameron Norman

design thinking, education & learning

Hacking the Classroom: Beyond Design Thinking

A nice summation of what Design Thinking is and how it's been applied elsewhere, with an eye towards education. This is shared from the User Generated Education blog.

User Generated Education

Design Thinking is trending in some educational circles. Edutopia recently ran a design thinking for educators workshop and I attended two great workshops at SXSWedu 2013 on Design Thinking:

Design Thinking is a great skill for students to acquire as part of their education. But it is one process, like the problem-solving model or the scientific method. As a step-by-step process, it becomes a type of box. Sometimes we need to go beyond that box; step outside of the box. This post provides an overview of design thinking, the problems with design thinking, and suggestions for hacking the world to go beyond design thinking.

Design Thinking

Design thinking is an approach to learning that includes considering real-world problems, research, analysis, conceiving original ideas, lots of experimentation, and sometimes building things by hand (http://blogs.kqed.org/mindshift/2013/03/what-does-design-thinking-look-like-in-school). The following graphic…

View original post 1,182 more words

knowledge translation, social media

Social Media For Researchers


I recently sat down with Armine Yalnizyan, a journalist and board member of the Canadian Institutes of Health Research (CIHR) Institute of Population and Public Health (IPPH), to chat about how social tools can assist researchers in doing their work, sharing their learnings, and improving knowledge translation to the community.

Armine kindly referred to me as a "rock star social media communicator", but I think we all can play some pretty interesting metaphorical music in our use of social media to engage the public. Here is the link to that webinar conversation for those of you interested in understanding more about what social media is and how it works to support the goals of health research more broadly.