Innovation grants are a misnomer, signifying one of the greatest problems with academic science and the quest to create novel solutions to important problems.
Yesterday the Canadian Cancer Society Research Institute (the research arm of the largest charitable agency supporting cancer programming in Canada) announced its new, revamped lineup of grant-funded programs to be launched within the coming months. Among the first of these new programs is one called Innovation Grants (PDF), while another is called Impact Grants (PDF). Notably, both of these new program announcements include a definition of the key term in their program call:
Innovation: The action of innovating; the introduction of novelties; the alteration of what is
established by the introduction of new elements or forms.
-Oxford English Dictionary
Impact: the action of one object coming forcibly into contact with another; a marked effect or influence.
-Oxford English Dictionary
It is impressive how clearly and directly the definitions are linked to the program calls. Why? Because too often grant program calls and their expression in reality are disconnected. I once served on a grant panel reviewing proposals aimed at ensuring quality knowledge translation, only to find that most reviewers were comfortable accepting things like “prepare academic manuscript based on research” and “present findings at major conference” as knowledge translation goals in their own right. I was appalled.
Yet, I can’t help but think, despite the good intentions here, that these new programs are going to follow in similar footsteps. The problem is not the funder, but rather the way that funding is granted and the reliance on the system to change itself.
The Innovation Grants are designed to:
support unconventional concepts, approaches or methodologies to address problems in cancer research. Innovation projects will include elements of creativity, curiosity,
investigation, exploration and opportunity. Successful projects may involve higher risk ideas, but will
have the potential for “high reward”, i.e. to significantly impact our understanding of cancer and
generate new possibilities to combat the disease by introducing novel ideas into use or practice
The mechanism by which these grants are to be decided is, as far as I can tell, peer review. For that reason alone, we can feel some level of confidence that these grants will fail outright. Peer review is designed to judge the quality of content by what is and has been, not by what could be. “Innovation” is about doing things differently, often markedly so. Scientific panels are about supporting incrementalism, particularly in the social and behavioural sciences.
Innovation is also about risk and the potential for failure. These are two words that are highly problematic in present-day academic science. First, if you’re a junior scientist, you may be working desperately to fund yourself and your research (and research team). The price of failure is high. If you’re not able to publish meaningfully from your research, you will have a hard time getting your next grant and keeping yourself afloat. In the public health sciences, for example, CLTAs (contractually limited-term appointments) are dominant.
I should know, as that’s the position I hold.
But the tenured faculty don’t have it much better. While they are more secure, their research teams, their graduate student trainees (who rely on projects to develop their skills), and their ability to build coherent programs of research are all at risk every time a grant is unsuccessful. There are real opportunity costs to pursuing risky ventures, so many don’t.
As one who has tried to be innovative with his work, and who has had the privilege (or curse, depending on the perspective) of interests that fall into the innovation category (or the “trendy” category, to the cynic), I’ve seen how innovation is treated, and it’s not good. Innovation programs tend to split committees. I’ve had too many comments returned to me with some variant of “this is amazing, potentially leading-edge research!” alongside “the use of non-conventional methods makes this suspect” or “I don’t understand what this is supposed to do.” As one who had to endure years of questions like “I don’t see how this Internet thing has anything to do with health” in the early days of eHealth, this line of questioning is familiar to me.
I point this out not to gripe, but to illustrate how innovation can get treated in academia. When you get feedback like the kind I described, it is very hard to critically assess the true merits of a proposal in order to improve it. Did people not understand an idea because it could have been written more clearly, or did they just not “get” the innovation? Were those who were excited simply caught up in the “newness,” or were they really in sync with my vision? As a scientist, I don’t know the answer and can’t improve, because the feedback is so contradictory.
And because innovators often create, develop, or define fields of inquiry or practice that do not yet exist or are still in development, there are few if any adequate reviewers available with the appropriate background on the topic.
In academia, we rely on tradition and on evidence (which is part of tradition: what has been done before), not on strategic foresight and innovation, to guide us. That is a problem in itself. Universities haven’t survived hundreds of years by being risky; they have survived because they were safe (in spite of the occasional radical shift here and there). With complex social problems and the challenges posed by diseases like cancer, something risky is needed, because the traditional ways of doing things have either been exhausted or are no longer producing the necessary health gains. Academics just aren’t positioned to embrace this risk unless the system changes, with them helping drive that change, to support innovation and not just talk about it.
Until that happens, the opportunities to live up to the definition of innovation posed above, and to create the impact described above, will be limited indeed.