The gap between what academics do and what those outside the academy think they do is enormous. The mysteriousness and elite status that universities enjoy may actually undermine the very values of inquiry and education they seek to promote. In this second in a series of posts on academic life, I take you behind the curtain of the academic Land of Oz to illustrate what life for at least one professor looks like.
‘Cause you’re working
Building a mystery
Holding on and holding it in
Yeah you’re working
Building a mystery
And choosing so carefully
The academic world has been my home for my entire adult life, and one that I helped to build and shape along with my peers with the aim of contributing to our collective knowledge, educating (mostly) young professionals, and hopefully enriching all of our lives along the way with insight drawn from research. This is what the public thinks happens in universities and, to a large extent, they are right. But the way this is done, the roles people play, and the manner in which the academic system is designed and operates is as much of a mystery as you will find in our society. But perhaps it’s time to (un)build it**.
And unlike the Wizard of Oz, this mystery does more harm than good, both to those building it and to those experiencing it from the outside. How? In part, because times are changing quickly and public institutions along with them. When times are tight, there is little appetite to support professors sitting in their offices, thinking deep thoughts, doing research of tangential value to society, teaching badly to undergrads and only to small groups of grad students, and taking four months off in the summer and three during the December holidays.
The first part of the problem is that this perception is wildly off the mark.
The second part is that universities seem to be doing a poor job of correcting this perception.
For starters, universities are investing a lot less in faculty than people think. In my six years, my university itself picked up only a small portion of my salary. The rest came from a philanthropic donation, salary awards earned from government-funded research programs (e.g., the Canadian Institutes of Health Research), contracts with community service groups, and sometimes grants. Unlike in other countries, Canada doesn’t have a system where investigators can easily draw a salary from the operating grants they receive. Thus, I could afford research assistants, equipment and travel, so long as I didn’t get paid.
To cover this, I had to get separate career awards to pay for my salary, and as these awards typically covered less than 50% of my wage, I needed multiple revenue streams at the same time. This meant writing 2-3 times the number of grants that a tenured faculty member would have to write. To make matters worse, there are many more people in my position than there are tenured faculty, so the competition was, and is, stiff.
In the current CAUT Bulletin, Tom Booth writes about this further in the context of academic freedom and the US system:
It is disturbing to note that only 41 per cent of faculty members in universities in the U.S. are tenured or tenure stream. The majority of those will be retiring in the next 10 years and unless the current trend to replace tenured academic staff with non-tenure track appointments is reversed, the next decade will likely see tenured faculty representing only 20 per cent of American university teaching and research staff.
Earlier research by Harald Bauder (PDF) on academic labour segmentation in Canada found, among other things:
In Canada, academic labour has been depreciating over the previous decades. For example, faculty salaries declined relative to total expenditures of universities, from more than 31 percent in the late 1970s to roughly 19 percent in 2004 (CAUT, 2006, 4). In addition, the faculty-student ratio at Canadian universities has changed. While in the 1992-1993 academic year there were on average only 18.8 full-time students for every full-time faculty member, eleven years later there were 23.7 (CAUT, 2006, 51).
For more on the problematic faculty math in Canada, check out the CAUT’s report on the state of university teaching (PDF).
But the research side of the equation isn’t faring much better. Last February I profiled the declining state of things in the United States, a decline mirrored in Canada. Scientists Johannes Wheeldon and Richard Gordon recently pointed this out in a Huffington Post column, writing:
The role of research funding to an academic’s career has never been more important, and yet there is an emerging consensus that the way we organize our system of research grants is broken. While concerns about Canada’s model of research funding are longstanding, in recent years they have become increasingly stark. These include perpetual underfunding, charges of bias, and an over-reliance on the peer review system, which favours orthodoxy over innovation.
In short: if you’re a young researcher your share of the funding pie is smaller than ever. If you want to innovate, your prospects are even worse.
Yes, but what about academic freedom? That does exist, for now. In all my years at my university my boss (the Chair or Director) came to visit me only a handful of times. No one checks when I arrive or leave, nobody even cares if I work from home or a desert island. As long as I show up for my teaching duties, respect academic procedures, and continue to produce good research, the university system doesn’t much care what I do with my day-to-day activities. That is a real blessing and supports creative thinking about big problems.
Yet, while I could sleep in almost any day, I never did. I could take a long weekend anytime, but instead I was in the office. Visiting a cottage? Sure, so long as there was Internet access and plugs for a laptop. See the world! — just make sure you keep on top of your email. Family time is wonderful, as long as there’s time to write before and after. My average workweek over the past two years was 90 hours. And while work does inspire me, too much of anything is not good for long. Oh yes, and did I mention that I study health promotion? The power of social norms, and of what Pierre Bourdieu calls habitus, is akin to the Death Star‘s tractor beam, only you don’t see it; it’s deep within us.
None of these were enforced activities, but they are the norm. My faculty colleagues — young and old, tenured or not — work long hours all year. The system is set up for it. For example, the Tri-Council grants in Canada — SSHRC, CIHR, and NSERC — and many of the major health charities that fund research all have deadlines requiring registration (pre-proposals) between August 15th and October 30th, which happens to coincide with: 1) summer vacation for most North Americans and Europeans (in August and the months before, when you organize the research plan), 2) the start of classes and the academic year, 3) orientation of new students, and 4) student awards and bursaries (for which we serve as referees writing letters of support). Just try to get a date for anything longer than lunch with an academic doing research during this time.
Grading? Our exams and papers at my institution are due on December 21, which means Hanukkah, Christmas, Kwanzaa, and New Year’s Day are all grading holidays. Pass the gravy on this turkey.
And we are the ones who invented this system!
None of what I am writing is meant to garner sympathy for me personally. I made these choices in my career hoping they would lead to something good for the world, for myself and for those I care about. Sometimes I succeeded and sometimes not, but they were my choices, wise or not, and I live with them. What I am trying to do is paint a picture for others of the environment that faculty and staff like me live in every day. This is not the idyllic life the public thinks it is. And while professor remains among the most respected professions, that respect will fall flat if times get tight, people are looking for more for less, and we faculty are seen (misguidedly) as having more while others have less.
But what about pay? That’s a tricky one. I earn a wage I have no complaints about in absolute terms. I make well above the Canadian average, but nothing close to being indexed to my education. Considering I have 16 years of post-secondary education (education that I paid for), I could have done a lot better going into other fields. But as a wise colleague of mine once said about pay in professor-dom: “you won’t get rich, but you’ll never be poor.” That counts for something.
At the same time, on an hourly basis, my pay goes downhill. And at some point, time becomes worth far more than anything I have to offer financially. I am also expected to spend my own money on my job. Indeed, teaching supplies, continuous learning, staff rewards (and continued education for staff), and the incidentals of the job all cost money, and in most traditional centres there are few mechanisms to pay for them. That money comes from somewhere, and that somewhere is the faculty member’s pocket, just as elementary and secondary school teachers often pay for school supplies. We believe so strongly in what we do that we’ll do it without recognition or compensation.
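The hourly-rate point is just back-of-the-envelope arithmetic. The salary and working-weeks figures below are hypothetical placeholders, not my own numbers; only the 90-hour workweek comes from what I described above.

```python
# Hypothetical illustration: the same annual wage spread over a
# 90-hour academic workweek versus a standard 40-hour week.
salary = 100_000   # illustrative annual wage (CAD), not a real figure
weeks = 48         # assumed working weeks per year

standard_rate = salary / (weeks * 40)  # 40 h/week
academic_rate = salary / (weeks * 90)  # 90 h/week, as described above

print(round(standard_rate, 2))  # 52.08
print(round(academic_rate, 2))  # 23.15
```

Whatever the starting salary, more than doubling the hours cuts the effective hourly wage by more than half.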
We are a tribe that is as foreign to the public as the San people in Africa were to the first European explorers. But like a tribe we have behaviours that are not always pro-social.
Just as members of street gangs earn most of their livelihood from theft, academics gain most of theirs from careers. Being a member in good standing of a gang and a supergang is crucially important for advancement of one’s career. There is little chance of advancement in the academy without hard work, but flaunting membership in gang and clan can certainly supplement or even substitute for talent and intelligence. Clearly and repeatedly showing one’s loyalty to these groups can be most helpful in obtaining research grants and acceptance of publications, twin lifebloods of the academic career. – Scheff, T.J. (1995), Academic gangs. Crime, Law, and Social Change 23: 157-162.
It is a strange space to be in. Alien.
While I don’t particularly like the system we’ve created, it is what it is — today. But it can change if we — all of us — stop and pay attention to what it really is and work to make it what we want it to be. Well established institutions are hard to change because the practices within them are so deeply entrenched in a culture that is often accepted as is.
As this series unfolds, I’ll explore some more of these themes in detail.
The message to my fellow academics is this:
The modern university system has a lot of problems, yet our mandate and potential to contribute to the world through research, teaching and social consulting is as big and as needed as ever. Society needs us at our best, but we undermine our best at our peril. We need to fix the system now; otherwise forces beyond us will impose changes in ways that may not be conducive to good scholarship, equity, and effective public service.
For those who like the system as it is, let me leave you with this quote from Giuseppe di Lampedusa’s book, The Leopard:
If we want things to stay as they are, things will have to change.
I don’t think we want things to stay as they are. But, we do want some things to stay the same.
This is the latest in the Alien Shores series of reflections on life in academia from one who is about to leave it.
** And yes, I know that “un-building” is not correct English. But deconstruct, take down, demolish and pull apart don’t work here. I am using my academic privilege to make up words.
Unravel the mystery and crank up Sarah McLachlan and think about what these words mean for our business…
Sarah McLachlan “Building a Mystery”: excerpt
You live in a church
Where you sleep with voodoo dolls
And you won’t give up the search
For the ghosts in the halls
You wear sandals in the snow
And a smile that won’t wash away
Can you look out the window
Without your shadow getting in the way?
You’re so beautiful
With an edge and charm
But so careful
When I’m in your arms
‘Cause you’re working
Building a mystery
Holding on and holding it in
Yeah you’re working
Building a mystery
And choosing so carefully
Of the many persistent myths about innovation, the lone genius is the stickiest. Research continues to show how untrue it is.
When we consider achievement in science, we think of individuals. The Nobel Prize might be awarded to a small group of individuals, but it is not awarded to teams. Indeed, team science is not recognized in the same way that individual achievement is.
There is a persistent myth that discovery is best achieved through individual genius applied to a problem. The “eureka!” moment is played up in science fiction stories and films from Frankenstein to Back to the Future.
It is a myth because research on creativity and innovation consistently shows that hard work and persistence beat out raw skill (which is a myth in itself). Indeed, people become skilled through some natural talent, but mostly through hard work, concentration and consistent practice.
Yesterday, Keith Sawyer, a psychologist and researcher of innovation and creativity, asked that the lone genius myth be put to rest. I wish him luck and support the idea, but don’t suspect it will come true. He writes (on artists):
The Wall Street Journal of June 3, 2011 reports that many contemporary artists use an equally collaborative studio system. The article (by Stan Stesser) reports that Jeff Koons has 150 people on his payroll and readily admits that he never paints himself. A long list of expensive, widely collected artists are named in the article; apparently, it is not a secret that the “artist” doesn’t actually execute the work himself. There’s no misrepresentation here; gallery owners and dealers tell potential buyers the actual story, and buyers still collect the works.
The earliest return to a collaborative studio model was probably Andy Warhol in the 1960s, who called his studio “The Factory” and famously said “I want to be a machine.” So the “lone genius” model of the painter has been fading for several decades already. In the greater scheme of history, the Romantic era belief that the painter was an inspired solitary genius has been a small blip: slightly over 100 years. Painter as lone genius: Rest In Peace.
The notion that creativity is an individual thing might have to do with the unique way in which we each experience creativity. While each of us might create an idea from different things — ideas, feelings, abstract experience — we put such sensory stimuli together in ways that must make coherent sense before we can understand them. To communicate them, we need to make sure these ideas are coherent beyond ourselves, and the best way to do that is to get the input of others in the process. That means collaboration.
Whether in science or art, teams, collaboration, sharing, and co-creating are too often denied or left unexplored in daily practice. In their place sits the myth of the lone genius. It is why faculty are rewarded for being first author on a paper or grant rather than for being part of a team or collaboratory. It is why we do individual performance appraisals for most people. It is also why students are graded on their own work, with little attention to how they shared that work with others.
Dr. Sawyer’s declaration of the lone genius as dead is optimistic. What is needed now is something to make it realistic.
Interested in learning more about this topic? Visit the library section of Censemaking.
IDEO’s CEO Tim Brown recently observed a renewed interest in design within science, but is that same feeling reciprocated and, if so, what does that mean for both fields?
Tim Brown, author and CEO of the renowned design firm IDEO, recently posted on the firm’s blog some observations he had on the relationship between design and science.
In that post, he asks some important questions of both designers and scientists.
I wonder how much might be gained if designers had a deeper understanding of the science behind synthetic biology and genomics? Or nanotechnology? Or robotics? Could designers help scientists better see the implications and opportunities of the technologies they are creating? Might better educated and aware designers be in a position to challenge the assumptions of the science or reinterpret them in innovative ways? Might they do a better job of fitting the new science into our lives so that we can gain more benefit?
The question of the relationship between designers and the science that informs the materials or products they use is one that will play out differently depending on the person and context. However, I would welcome the opportunity for designers to challenge much of what science — and I use that term broadly — does, particularly with regard to the application or translation of scientific research into policies and practices. Indeed, this is a frontier where designers have tremendous opportunities to contribute, as I’ve discussed elsewhere.
Knowledge translation and translational research are two of the most vexing problem domains in science, particularly with health. Despite years of efforts, scientists haven’t been able to advance the integration of what is learned into what is done at a rate that is acceptable to policy makers, practitioners and the public alike. The problem isn’t just with scientists, but the way the scientific enterprise has been engineered.
Scientists haven’t had to consider design before. Tim Brown asks further questions about what it might be like if they did:
If scientists were more comfortable with the intuitive nature of design might they ask more interesting questions? The best scientists often show great leaps of intuition as they develop new hypotheses and yet so much modern science seems to be a dreary methodical process that answers ever more incremental questions. If scientists had some of the skills of designers might they be better able to communicate their new discoveries to the public?
In this case, it might be the chance for designers to step up and consider ways to work with those in science to create better institutional policies, laboratories, and collaborative environments to foster the kind of linkages necessary for effective knowledge translation.
Knowledge translation models, such as the widely cited one conceived by the Canadian Institutes of Health Research, are both process and outcome oriented; ideal for designers. KT is a designed process, and the more it is approached through the lens of design thinking, the greater the likelihood we’ll get a system that reflects its intentions better than what we currently have.
This week Watson, the latest IBM-created supercomputer, trounced two of Jeopardy!’s greatest champions at their own game. A sign of the dawn of true artificial intelligence, or a red herring?
It seems this was not a great week for human decision making. After much hype, the human race lost its battle for supremacy on Jeopardy! to a computer named Watson. Just as when Garry Kasparov was beaten by another IBM product, Deep Blue, in a 1997 match for chess supremacy, computers are now considered to be nearing the smarts of humans. It sounds plausible when you consider the kind of knowledge that Jeopardy! champions Brad Rutter and Ken Jennings command.
The problem is that Jeopardy! is about knowledge of facts organized in a catalogue manner. With all due respect to Messrs. Rutter and Jennings, the ability to learn facts is relatively straightforward, even if nearly 99% of the population can’t do it as well as they can. It is a simple input-storage-output problem, whereby data is entered, encoded and stored, and then brought forth upon request. It is the classic example of something that might be simple, but not easy. Clearly, the fact that it took many years of programming to create a computer that could compete with humans shows how difficult the task is.
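The input-storage-output pattern can be made concrete with a deliberately trivial sketch. This is my own illustration, not a description of how Watson actually works: a fact catalogue is just keyed storage, and a question either matches a stored key or it doesn’t.

```python
# A toy fact catalogue: input (learn), storage (a dict), output (recall).
facts = {}  # storage: normalized question -> answer

def learn(question, answer):
    # Input: encode (normalize) and store a fact.
    facts[question.strip().lower()] = answer

def recall(question):
    # Output: bring the stored fact forth upon request, if it exists.
    return facts.get(question.strip().lower(), "I don't know")

learn("Who composed the Etudes-tableaux?", "Sergei Rachmaninoff")
print(recall("who composed the etudes-tableaux?"))  # Sergei Rachmaninoff
print(recall("What mood does the piece invoke?"))   # I don't know
```

The interpretive question fails not because the catalogue is too small, but because nothing in keyed storage represents context or meaning; that is exactly the gap that complexity opens.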
But the discourse suggesting that Watson and its progeny are about to displace humans is misguided for many reasons, most notably because of the nature of the problem at hand. Computers are pretty good — perhaps outstanding — at organizing and recalling simple information presented in a linear manner. They might also be good at complicated information, the kind organized with many interlocking components but still in a relatively ordered way.
Complexity introduces a whole new realm of problem-solving skills that I don’t see computers addressing soon. Complexity adds an exponentially larger number of information combinations that become contextually bound. Computers are great at processing algorithms, but not so good at reading landscapes the way humans are. We’re wired for it.
It’s one thing to ask who wrote the Études-tableaux (“study pictures”), two sets of piano pieces by Sergei Rachmaninoff, and quite another to explain how a piece can be used to understand the mood and spirit of the Fair it invokes. The former is simple; the latter is more complex, open to multiple interpretations and contexts that overlap.
A complex scenario cannot be broken down into component parts because we never have complete, perfect information. In chess or Jeopardy! we do: we can know all of the answers and possible combinations and thus can program something to respond to them. Too often, this model prevails in our decision-making in public health, where we naively presume we know everything. But that is often a fallacy or, at best, a myth.
Fifteen hundred years ago everybody knew the Earth was the center of the universe. Five hundred years ago, everybody knew the Earth was flat, and fifteen minutes ago, you knew that humans were alone on this planet. Imagine what you’ll know tomorrow. – Agent K (played by Tommy Lee Jones), Men in Black
We once knew that health was simple, that research knowledge would always translate into use in the way researchers intended, and that the problems we faced would be solved using computers. Imagine — to follow Men in Black — what we will know tomorrow.
SEED magazine recently posted on the concept of early warning signs in complex systems that I found quite provocative and important.
Science is a creative human enterprise. Discoveries are made in the context of our creations: our models and hypotheses about how the world works. Big failures, however, can be a wake-up call about entrenched views, and nothing produces humility or gains attention faster than an event that blindsides so many so immediately.
There are so many key points in this one passage that are worth discussing at length.
Science is a creative enterprise. For reasons I’ve discussed elsewhere, I think science needs to embrace its creative side more than ever and embrace design. This isn’t universal, but if we (scientists) approached problems in the multidimensional manner in which designers typically approach them, we might create innovations and discoveries different from the ones we’ve made before. Why is this important (beyond the obvious, to those whose business it is to discover)? Complexity. The problems we are dealing with are now, more than ever, likely to be complex ones, which require different ways of approaching them and (some) different science and practice.
And as Albert Einstein famously said (or at least many people have attributed this to him — I can’t verify it, but it works nonetheless):
“We can’t solve problems by using the same kind of thinking we used when we created them.”
Discoveries are made in the context of our creations: our models and hypotheses about how the world works. In public health, where I work, the dominant models remain rooted in reductionist science. We are asked to ‘prove’ the links between certain activities and the outcomes they produce. This works relatively well in areas like sanitation, toxicology, (some) pharmaceutical or vaccination interventions, and injury prevention. It is for these reasons that the top achievements in public health, including massive increases in life expectancy and reductions in premature death, took place in the 20th century. But that was then. The challenges we face now reach into the realm of complexity, and if we fail to support the fundamentals of public health we will have both simple and complex challenges on our hands. The point here is that our models will only take us so far without some acknowledgement of the complexity of the problems they seek to explain. The context of our creations is complexity.
Big failures, however, can be a wake-up call about entrenched views. The key term here is “entrenched views”. My colleagues Alex Jadad, Murray Enkin, Shalom Glouberman and others once had a group called the Clinamen collaborative, which wrote a great piece on the problem of complexity when dealing with entrenched health care practices. Their recommendations are essentially:
1) there are no recipes for universal success,
2) pay attention to local conditions,
3) intervene small and often and then scale,
4) aim for stability first, then change.
In science, we fail a lot, and rather than seeing this as a potential positive, I see more conservative, risk-averse approaches to science. Supporting smaller, rapid-response scientific studies that are encouraged to fail would do more than the big, unadventurous team projects that provide high-level window dressing for grant funders and avoid making anyone look bad.
…and nothing produces humility or gains attention faster than an event that blindsides so many so immediately. Humility is a word that is too often absent from my profession. I’m not talking about the kind of humility that comes from acknowledging the limitations of a scientific study or the recommendations of a report. I am speaking of true humility, where one “seeks to first understand, then to be understood”. Indeed, I would argue that we are lousy at both more often than we’re successful. Our understanding comes from a scientific perspective that holds us up as the experts. Once you’ve labelled someone or yourself an “expert” conversations immediately shift. Watch a classroom where the instructor insists on pure lecturing, being called “Dr.” and where “right” and “wrong” are regular parts of the conversation. Then watch a classroom where students learn from each other, are encouraged to share their experience and challenge the material, where the professor doesn’t push her or his titles and credentials, and where there is interaction between everyone. You’ll see a very different sense of humility from students and teachers alike.
When I encounter others on a genuine, authentic and intimate level of learning, I never cease to be left in awe. That comes from humility, and it is something I was fortunate to have modeled for me. I was once told by a professor, retiring on the day I convocated from my undergraduate degree:
“When I was an undergraduate, I knew everything. Now that I am a retired professor, I realize I know nothing. Every year of learning serves to teach me that I know less and less.”
The SEED article goes on to point to the current problems in science in dealing with complexity and the imperative towards collaboration and cross-disciplinary engagement:
Examples of catastrophic and systemic changes have been gathering in a variety of fields, typically in specialized contexts with little cross-connection. Only recently have we begun to look for generic patterns in the web of linked causes and effects that puts disparate events into a common framework—a framework that operates on a sufficiently high level to include geologic climate shifts, epileptic seizures, market and fishery crashes, and rapid shifts from healthy ecosystems to biological deserts.
The main themes of this framework are twofold: First, they are all complex systems of interconnected and interdependent parts. Second, they are nonlinear, non-equilibrium systems that can undergo rapid and drastic state changes.
Complex systems require the kind of deep attention that science brings, the spirit of engagement and problem solving that designers offer, and a space to bring them together. With their focus on reductionist science and the lack of embrace of design, universities haven’t been the home to this kind of thinking. But things can change because, after all, this is a complex dynamic system we’re talking about.
As the holidays approach, I’ve been spending an increasing amount of time looking at a field that has become my passion: design. Design is relevant to my work in part because it frequently deals with the complex, requires excellent communication, and, as Herbert Simon would suggest, is all about those interested in changing existing situations into preferred ones.
Yet for all the creativity, innovation and practicality that design offers, I find it lacking the scientific rigour it requires to gain the widespread acceptance it deserves.
This is not to say that designers do not employ rigorous methods or that there is no science informing design. For example, architecture, a field where design is embedded and entwined, employs high levels of both rigour and science in its practice. The issue isn’t that these two concepts aren’t applied; they just aren’t applied to each other. I was heartened this week to see Dexigner profile a new pamphlet on the science of design. Although true in spirit, it wasn’t what I expected, as it largely profiled ways to assess the quality of design projects from the perspective of design.
What if we could assess the impact of design on a larger scale, a social and human scale?
Interaction designers speak of this need to connect to the human in design work. The emergent field of social design is exemplified by groups like Design 21, which aim to produce better products for social good. All of this is important, but largely because we say it is so. Rhetorical arguments are fine, but at some point design needs to confront the problem of evidence.
Does “good” design lead to better products than “bad” design?
What components of design thinking are best suited to addressing certain kinds of problems? Or are there simply problems that design thinking is better at addressing than other approaches?
What methods of learning produce effective design thinkers? And what is effective design thinking anyway? Does it exist?
What is the comparative advantage of a design-forward approach to addressing complex problems over one where design is less articulated, or not articulated at all?
These are just some of the many questions for which there seems to be little supporting evidence. A scientific approach to design might be one of the first ways of addressing this. A scientifically grounded design field is far more likely to garner the support of the decision makers who approve and fund the kinds of projects that can have wide-scale impact. Design is making serious inroads into fields such as business, education, and health, but it remains a niche market when it has the potential to be much larger.
Roger Martin has argued that the reliance on scientific approaches to problem solving runs counter to much of design thinking. This assumes that science is applied in a very detached, prescriptive manner, which is common, but not the only way. Michael Gibbons and colleagues have described two forms of science, which they call Mode 1 and Mode 2. Mode 1 is what most people think of when they hear the term “scientist”: the (usually) lone researcher working in a lab on problems driven by curiosity, with the aim of generating discoveries. For this reason, it is often referred to as discovery-oriented research.
Mode 2 research is designed to be problem-centred, aimed at answering questions posed by practical issues, with a strong emphasis on knowledge translation. This is territory more familiar to the designer.
Design presents the opportunity to transcend both of these Modes into something akin to Mode 3 research, which I surmise would blend the abductive reasoning inherent in Roger Martin’s view of design thinking with a discovery-oriented approach that goes beyond the immediate problem to create value beyond the contracted issue. A design-oriented science of design would leverage the creative processes of designers alongside some of the tools and methods familiar to researchers in Mode 1 and Mode 2 science. Can we not do detailed ethnographic studies of the design process itself? Is there any reason why we cannot, with limits acknowledged and in appropriate contexts, attempt randomized controlled trials of certain design thinking activities and situations?
If design is to make a leap beyond niche market situations, a new field must dawn within design + science and that is the science of design and the design of science.