For the last two days I’ve been attending the Science of Team Science conference at Northwestern University in Chicago. It is what I can only imagine is the closest thing to the Super Bowl or World Cup of team science (minus the colourful jerseys, rampant commercialism, and hooligans — although that would have made quite an impact as academic conferences go).
The presentations over the first day and a half have illustrated how far we have come in just a few years. In 2008 a similar conference was held near the NIH campus in Bethesda, MD. That event, sponsored by the US National Cancer Institute, was an attempt to raise the profile of team science by highlighting the theories and rationale underlying why the idea of collaboration, networks and multi-investigator applied research might be a good idea. The conference was aimed at sparking interest in the phenomenon of collaborative team research for health and resulted in a special issue of the American Journal of Preventive Medicine highlighting some of the central ideas.
Although many of the same people are attending this conference as there were two years ago, the content and tenor of the conversation is markedly different. The biggest difference is that the idea of team science no longer needs to be sold (at least, not to the audience in the room). Attendees widely agree that team science is a good thing for a certain set of problems (particularly wicked ones) and that it will not replace normal science, but rather complement it and fill in gaps that standard research models leave.
There is also much contention. Unlike at other conferences, though, this contention is less about a clash between established bodies of knowledge than about uncertainty over the direction team science is heading and the best routes to get there, wherever “there” is. Stephanie Jo Kent, a communications researcher from UMass, has been live blogging the event (and encouraging the audience to join in; follow #teamsci10 on Twitter or Stephanie @stephjoke) and wrote a thoughtful summary of the first day on her blog. There she points to one of the biggest challenges that the emergent field of team science and the conference attendees will need to address: getting beyond “the what” of team science.
Because everyone has their own thing that they’re into, whether it’s research or administration or whatever, we would have to come up with “a meta-thing” as a goal or aim that everyone – or at least a solid cadre of us – could get behind. What if we decided to answer the process question? Instead of focusing on, “What is ‘the what’ of team science?” which takes as its mission connecting the science; we propose an examination of self-reflective case studies in order to identify “what works” and thus be able to explain and train people in the skills and techniques of effective team science.
This issue of training is an important one. My own research with the Research on Academic Research (RoAR) project has found that many scientists working in team science settings don’t know how to do it when they start out. We scientists are rarely trained in collaboration and teamwork, and the people who are trained in those skills are rarely in science.
It will be interesting to see where things go from here; I suggest following along on Twitter to find out.
Are we creating the type of innovators that suit the digital economy? That respond to any opportunity, not just the ones that we plan for? There’s a lot of thinking out there that suggests we’re not.
I recently read an article in the December 2009 issue of the Harvard Business Review on how to train for innovation. When most people speak of innovation, they seem to assume there are a few key steps or tricks to being an innovator and that is about it. But Gina Colarelli O’Connor, Andrew Corbett, and Ron Pierantozzi argue that breakthrough innovation unfolds in three phases, each suited to a different type of innovator, and that these roles build on one another over the course of a career. This is an important and interesting idea, and a shift from the traditional mindset.
The authors state:
Companies must first understand that breakthrough innovation consists of three phases:
Discovery: Creating or identifying high-impact market opportunities.
Incubation: Experimenting with technology and business concepts to design a viable model for a new business.
Acceleration: Developing a business until it can stand on its own.
To address this, they suggest training people to match these distinct phases:
Each phase lends itself to distinct career paths, as well. The bench scientist, for instance, may eventually want to be involved in policy discussions about emerging technologies and how they may influence the company’s future. The incubator may want to pursue a technical path – managing larger, longer-term projects – or to manage a portfolio of emerging businesses. And the accelerating manager may want to stay with the business as it grows, or take on a leadership role.
Rather than develop those paths, however, many firms assume that an individual will be promoted along with a project as it grows from discovery through to acceleration. In reality, individuals with that breadth of skill sets are extremely rare. In other words, companies have essentially been setting their innovators up to fail.
Johnston said the country’s university system must shoulder part of the blame for the lag in Canada’s technological mindset. The schools haven’t done enough to train students to work smarter, he said, which means that few Canadian companies succeed based on innovation. Of Canada’s biggest companies, most are banks, while only BlackBerry maker Research In Motion — also based in Waterloo — has succeeded internationally, mainly because it has focused on innovation.
Perhaps the problem is that we use the term innovation so loosely that graduates fail to recognize where innovations are or how to move them along. Or, as Colarelli O’Connor and colleagues point out, they are trained in the wrong set of skills for the stage of innovation they are in.
The black hole visualization above might be more than just a depiction of something in the outer regions of space; it could be an apt metaphor for what is taking place in our research institutes and universities as far as young scientists are concerned. As a young scientist and innovator (a term I’ll come back to later), this is a deeply personal issue for me, so keep that in mind as you read on. I am also an educator, responsible for training a new group of scientists, health practitioners, and social innovators in public health — our future discovery agents — and it is in this latter role that I am most upset and passionate about the problem befalling scientific research: a systematic strangling of opportunities for young people.
Jonah Lehrer recently explored this topic in his column in the Wall Street Journal and on his science blog (“The Frontal Cortex“) and I’m very glad he did. Lehrer points to the widening gap between those who have funding and those that do not (the rich getting richer) and how this trend is hurting researchers at the very beginning of their career – the time they are most likely to make breakthrough discoveries.
In 1980, the largest share of grants from the National Institutes of Health (NIH) went to scientists in their late 30s. By 2006 the curve had been shifted sharply to the right, with the highest proportion of grants going to scientists in their late 40s. This shift came largely at the expense of America’s youngest scientists. In 1980, researchers between the ages of 31 and 33 received nearly 10% of all grants; by 2006 they accounted for approximately 1%. And the trend shows no signs of abating: In 2007, the most recent year available, there were more grants to 70-year-old researchers than there were to researchers under the age of 30.
My personal experience in Canada suggests the pattern here is not much different. As grant opportunities stagnate or decrease, the pie remains relatively stable in size while the number of people wanting to eat from it grows. And to compound the problem, researchers are staying in the field longer. As Lehrer notes, if you’re 70 or older you’re more likely to get an NIH grant than someone under the age of 30. What does that say to young scholars?
In order to be more competitive in grant competitions and for jobs, students are considering post-docs and spending a longer time training, paying money into an education and deferring potential employment-related income, with the hope that it will pay off. Recent National Science Foundation data shows this trend:
New doctorate recipients are increasingly likely to take postdocs, and that is evident in the 2006 SDR data: among all SEH doctorate recipients, 38% had held a postdoc at some point in their careers (table 1). More recent cohorts were more likely than earlier ones to have held a postdoc: 45% of those earning the doctorate within the last 5 years compared with 31% of those who earned the doctorate more than 25 years ago.
If that payoff is a stable research position and the ability to start up your own research group then they are likely to be disappointed, as Lehrer notes:
The age distribution of NIH grants has significant implications for American science. It has become much harder for young scientists to establish their own labs. According to the latest survey from the National Science Foundation, only 26% of scientists hold a tenure-track academic position within six years of receiving their Ph.D.
Jason Hoyt discusses this further, illustrates the trends in grant funding, and asks whether, given this trend, we have too many PhDs in the first place. Good question. But perhaps a better question is whether we’re killing off innovation, discovery and creativity by stifling the ability of people in their most creative years to do the work in the first place. And are we discouraging the next generation of young scientific leaders by making life for early career researchers so difficult that talented, creative people self-select out of the applicant pool for graduate school and faculty posts?
The issue of tenure mentioned above is not a moot point. To those outside the academy, the concept of tenure surely seems anachronistic in uncertain economic times, and I appreciate that there will be little public sympathy for non-tenured scientists and professors. However, it is worth pointing out that the science and innovation done in research operates on a time horizon unlike that of most other jobs. A grant takes months to prepare and months to adjudicate (a recent grant I applied for had a deadline of October 15th, and the decision will not be rendered until April); the research itself then takes anywhere from one to five years (if you’re lucky enough to have funding last more than a year or two); and publishing the findings takes at least another year. That timeline assumes you get funded the first time and that your manuscripts are accepted the first time around. Neither is a reasonable assumption. Yet grants and publications are the primary measures used to assess success. And if people think researchers get paid too much, consider what 14 years of post-secondary education and training gets you, according to Hoyt:
With a PhD, a postdoc can expect to start, at most, US $42K a year in academia and $52K in industry.
I made $36.5K as a postdoc, and as an assistant professor at a major research university I make less on an hourly basis than all but my part-time data entry clerk (this includes my graduate student research assistants), considering the number of hours I have to work each week to get everything accomplished. My reason for doing this work isn’t money, but at some point, for most young researchers carrying a mountain of student loan debt from a decade and a half of accumulated education and opportunity costs, it has to be.
The tenure gap also points to another shadow in the system: the expectation that scientists raise their own salary. Scientists and professors are spending an ever-increasing amount of time writing grants to pay themselves so that there is someone to do the research. In the United States, there exist mechanisms to put salaries into grants; in Canada, however, this is a relatively rare occurrence. The assumption is that universities and research institutes pay salaries and funders pay for research, which simply doesn’t hold true. Imagine having most of your creative talent spending a third of their time applying for funding to support them in…writing more grants to support them in writing more grants. When does the innovation happen? And when are scientists supposed to be doing all of that knowledge translation work they are increasingly expected to do?
Another problem for young researchers (and one tied to the tenure or long-term contract issue) is that the value of an innovation is only really seen in hindsight. Any profession based on innovation therefore requires methods of promotion, retention and acknowledgment that fit this horizon, even if imperfectly, particularly in the basic sciences, where an innovation’s value isn’t always obvious for many years. How can you reasonably judge someone’s contribution over a short period of time?
Here we have a system that espouses language of innovation without any mechanism to support it or worse, an entrenched pattern of behaviour designed to prohibit it. How much sense does that make?