Measuring Success in Science
by Alan Saltiel
Mary Sue Coleman Director of the Life Sciences Institute
Success is the child of audacity.
In this era of fiscal challenges and economic uncertainty, when society is compelled to make hard choices about which endeavors to support, there is a fervor to find so-called objective (often number-based) metrics to define success. Science is not exempt from this pressure. The ambitious STAR METRICS initiative underway at NIH, which attempts to quantify the return on investment to the American taxpayer from public funding of biomedical research, is a good example of the effort to quantify research success at the national level.
Within the academic science community and within individual universities and colleges, we also struggle to identify the right metrics for measuring scientific success. In this issue of explore LSI we report a number of statistics about the LSI, including our impressive track record in garnering external funding for our research. While I am very proud of our achievements in grant support, I am somewhat hesitant to present them in this manner, because it risks giving the impression that this is how we measure success at the LSI. It is not. In fact, the trend of equating excellence in science with the amount of research funding, presumably to incentivize more grant-getting, is a dangerous one that must be actively resisted by anyone passionate about discovery.
How much research funding a laboratory has depends on many factors, none of which relate to the quality or impact of the research underway. It can reflect the career stage of the faculty member (funding peaks somewhere in the mid-to-late career); the area of research (what is the trend du jour?); the kind of research being conducted (some studies are simply more expensive than others); and how much effort a faculty member devotes to other activities, such as teaching or administration. We are keenly aware of this at the LSI, where our multidisciplinary nature means we have many different types of scientists and scientific research. Comparing our faculty on this measure would make no sense at all. It also sends the wrong message, particularly to our junior scientists, who are more susceptible to externally imposed measures of success.
We want to foster the kind of science that moves a field forward, creates novel ways of understanding, and gives insight into the largest and most complex problems in bioscience. In other words, research that has an impact. Impact simply does not correlate well with the amount of external funding a lab receives. And the error of this comparison is only magnified when the measure is revenue dollars per square foot, a metric currently in vogue in many research-based organizations. How much space a lab occupies has to do with the availability of space, the space needed for that particular research, and how far a dollar goes in that area of research—all factors that are completely unrelated to the impact of the research.
Do we want to see our faculty succeed in getting outside funding for their research? Absolutely. External funding pays for the science. Our scientists devote a great deal of their time to securing this funding, not for the glory of piling up dollars, but to support the execution of their best ideas. Funding is necessary but not sufficient for success, and equating funding levels with success, out of our legitimate (and sometimes desperate) need for resources, is a mistake. When measuring and celebrating the success of our scientists, and the potential return on investment, we need to keep our eyes on the real prize: discovery.