As a researcher in Germany, you get ambiguous signals regarding the general situation of German science: On the one hand, everything seems to be working quite well; there is money, there are publications, and there are plenty of eager Ph.D. students. On the other hand, science and crisis seem like two peas in a pod. For decades there has been constant talk of a nationwide crisis in science.
Granted, Germany has not produced many Nobel prize winners in recent years, there is no German university comparable to Harvard or Oxford, and a certain amount of brain drain — especially towards the US — cannot be denied. But sometimes one suspects that this crisis is not based on actual shortcomings of German science. Rather, it seems to be part of the German identity, like Weissbier and Wagner. Possibly some of our complaints are merely an expression of our perfectionist and pessimistic German nature.
Despite these doubts about the severity of the crisis, a plethora of New Public Management policies — ranging from more autonomy for the universities to the Excellence Initiative — have been introduced in recent years to bring the crisis to an end. They all share one notion: the research system will perform at its best if stimulated by competition and pressure. This seems reasonable — using competition and pressure to stimulate a system is not a new idea. But is this notion really helpful for science?
The chronic crisis of Australian science
To analyze how successful New Public Management strategies are in fighting a decline in scientific output, let us turn to sunnier nations, where the impression of crisis cannot so easily be linked to a stereotypical character trait. Take, for example, Australia. Down under, worries about a crisis in science became acute in the late 1980s. Much like in the German Excellence Initiative era, concerns about Australia’s science losing ground were not confined to the scientific community but permeated large parts of the public. In 1988, for example, the Melbourne newspaper “The Age” published a four-part series entitled “Science is losing its heart”, painting a picture of a demoralized and demolished Australian science.
Consequently, measures were taken to stimulate Australian science and fortify it for global competition. In practice, this meant that the old funding system was replaced by competitive money distribution. In 1992, the Department of Employment, Education and Training introduced a policy that required every university to report its publication output. Funding was distributed accordingly: those whose publication output suggested good performance were rewarded, while those who had performed badly ended up with less money in the following years. The mind-set behind these measures was thus very similar to that of the New Public Management strategies we see in Germany these days: piling pressure on scientists will push them to peak performance.
The pressure was effective and the scientists reacted: by the turn of the millennium, Australia’s share of scientific articles in the Science Citation Index had increased by 25%. But, surprisingly, the perception of Australia’s science going through a crisis did not change — on the contrary, the crisis became chronic.
In 2000, Jan Thomas, then Vice-President of the Federation of Australian Scientific and Technological Societies, wrote:
“There is little doubt that Australian science is in crisis. We need to dare to dream that this can change but we also need to pursue actions that may make the dreams become reality.” [1]
Scientists demonstrably produced more output, and still there was “little doubt that Australian science is in crisis”? The reasons for the crisis going chronic were reflected in another indicator, not in the pure aggregate publication count: Australia’s citation impact, i.e., the number of times Australian publications got cited, had plummeted severely, dropping the country from 6th place among 11 OECD countries in 1988 to 10th place in 1993, where it still lingered in 2000.
The key to understanding the developments in Australia lies in the incentives that the policy makers had created, in the direction of the pressure that had been applied to the system: the one and only thing that earned a university more funds was its aggregate count of articles listed in the Science Citation Index. Not the quality of an article but only the number of published articles was the criterion that brought in more money. It was easy for scientists to play by this new rule: become a salami slicer, cut your results into as many small pieces as possible, and publish them in journals with lower standards and lower impact.
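A back-of-the-envelope calculation with purely hypothetical numbers (not taken from the Australian data) illustrates the mechanism. Define

citation impact = total citations / number of publications.

If the number of publications rises by 25% while the citations they attract stay roughly constant, the impact drops to 1/1.25 = 0.8 of its former value, a 20% decline despite the seemingly improved output.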
Le Chatelier and Research Policy
Obviously, the policy makers in Australia had never heard of thermodynamics, let alone of Le Chatelier’s principle. In the late 19th century, Henri Louis Le Chatelier, a French chemist, formulated a qualitative law about how a change in conditions, for example a change in pressure, affects a chemical system: the equilibrium shifts to counteract the change, and a new equilibrium is established. This principle – that a change to the status quo of a system will provoke a backlash – is ubiquitous and has even been applied to economics.
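A textbook example from chemistry, not part of the studies discussed here, makes the principle tangible. In the ammonia synthesis equilibrium

N2 + 3 H2 ⇌ 2 NH3,

increasing the pressure shifts the equilibrium towards the side with fewer gas molecules, i.e., towards ammonia, so that the system partially counteracts the imposed change.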
It is just as easy to apply Le Chatelier’s principle to research policy. Scientists are (usually) clever enough to understand the rules of the funding game. If policy makers change the rules, scientists will change their behavior accordingly. And if the rule is “Maximize the number of publications to get more money”, that is exactly what they are going to do.
The Australian case, however worrying it was for Australian science, provides valuable insights into the possible reactions of a research system to pressure. Linda Butler, an Australian scientist active in the field of scientometrics, conducted a detailed study of the effects that led to the chronic Australian research crisis. She writes that in the Australian system it was
“possible for university researchers to put a dollar value (either to themselves or to their university) on their ability to place an article in an ISI journal” and concludes: “…the driving force behind the Australian trends appears to lie with the increased culture of evaluation faced by the sector. … In consequence, journal publication productivity has increased significantly in the last decade, but its impact has declined.”[2]
Diamonds are formed under pressure – is that true for research?
If a research policy can demolish science, this suggests, conversely, that with a clever set of rules policy makers can push the scientific community to peak performance. And there is evidence that this clever set of rules should indeed be based on a competitive funding structure. In a careful study of the productivity of Swiss research institutions, Thomas Bolli and Frank Somogyi could show that research productivity increases with a competitive funding policy [3]. Other scientometric analyses reveal that competition in funding distribution has a positive impact on a research institution’s position in the Shanghai University Ranking. In all these studies there is one problematic aspect, though: how should research productivity or research performance be measured?
Just as in the Australian case, Bolli and Somogyi merely count publications and declare their aggregate number a performance indicator. We have learned from Australia that this is likely to be counterproductive as an incentive and that we should look for other possibilities.
We can turn back to crisis-driven Germany, for example. In the land of poets and thinkers, a popular performance indicator for research is a very business-oriented one: the amount of acquired third-party funding. This seems to be a rational choice, since the application procedure for funding certainly ensures quality and separates the wheat from the chaff. Unfortunately, it is not that easy. In 2009, Ulrich Schmoch and Torben Schubert from the Karlsruhe Institute of Technology published a study showing that increasing the share of third-party funding does not always enhance productivity in research [4]. Plotting research productivity against the share of third-party funded research rather yields an inverted U-shape. This means that there seems to be a point of saturation, beyond which a larger share of third-party funded research decreases research productivity rather than increasing it. Schmoch and Schubert wisely conclude that “indicator sets should strive for sustainable incentives, which can be guaranteed if the sets are broad enough.”
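As a purely schematic illustration, and not the functional form estimated by Schmoch and Schubert, one can picture productivity P as a function of the third-party share s as

P(s) = c + a·s - b·s², with a, b > 0,

which rises at first, peaks at the saturation point s* = a/(2b), and declines beyond it.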
A clever set of rules that gets the pressure just right to form scientific diamonds thus hinges on an appropriate indicator for research quality. And, sadly, defining such an indicator is not at all straightforward.
The Quality Myth
Even if we had a good set of indicators, the question remains whether this would guarantee that the best researchers get the grants. Most of today’s more or less frustrated researchers will undoubtedly confirm the first piece of evidence against this quality-only assumption: it seems inevitable that the rich scientists get richer while the poor get poorer. If only the quality of current research mattered, we would expect some rebalancing between rich and poor researchers every once in a while. This puzzling effect has a name in sociology: it is called the Matthew effect, after the same Matthew who wrote down the Gospel. In the Parable of the Talents, we can read: “For unto every one that hath shall be given, and he shall have abundance: but from him that hath not shall be taken away even that which he hath.”
The Matthew effect seems to be amplified by competitive research funding: in 1984, before the introduction of New Public Management to German science policy, less than 10% of German professors completely agreed with the statement “It is always the same people who acquire the funds for their research”. In 2010, i.e., in the middle of the German Excellence Initiative era, more than 20% agreed [5]. Of course, these numbers only reflect personal opinions and perhaps the hurt pride of unsuccessful researchers. But there is other evidence that the Matthew effect plays a crucial role in funding acquisition.
In a 2006 study with the telling title “The Quality Myth: Promoting and hindering conditions for acquiring research funds”, Grit Laudel, another Australia-based researcher in scientometrics, gives some answers as to why the quality-only assumption does not necessarily hold and why the money might always pile up in the same hands.
In the study, 45 German and 21 Australian experimental physicists were interviewed about the difficulties of funding acquisition. Laudel identifies an abundance of non-quality-related factors that influence the outcome of a researcher’s grant application. Among them are know-how in fundraising and the availability of high-quality collaborators, but with regard to the Matthew effect the most important are probably a continuous research trail on the topic in question and the amount and significance of prior research. One of the interviewed researchers pins the latter down as a chicken-and-egg problem:
“… if you already have lots of publications in an area then it’s easier to get it funded”.
Another German scientist says:
“For completely new things, new ideas you hardly get money. … If you intend to start something new, you need to do some research to show that it works.”[6]
Consequently, young, unknown researchers, who could bring new and fresh ideas into science, nowadays seem to be under twice the pressure: they have to acquire funds to gain a better reputation and a better ranking, but they somehow need to get a lot of research done before their funding applications have any prospect of success — an unresolved paradox.
Pressure towards low-risk, mainstream research
With regard to Le Chatelier’s principle, Laudel’s study shows another interesting feature: a competitive research funding system influences not only the scientists’ publication behavior. Among the adaptation strategies that scientists employ to obtain more funds is also the choice of research topic. A German physicist says:
“The reviewers of the Deutsche Forschungsgemeinschaft (German Research Foundation) are very reluctant to give you the freedom to just try something”.[6]
Indeed, one way scientists adapt to more competition is to conduct low-risk research, often selecting predetermined and “cheap” topics.
Laudel concludes:
“… it would make sense to have … mechanisms to counteract the pressure of external funds towards mainstream, low-risk, application-oriented research”.[6]
Recently, the European Union has picked up this idea and specifically funds so-called “high-risk/high-gain” research in a venture-capital-like approach to funding. The 2012 ERC Advanced Grants Call explicitly encourages ground-breaking, high-risk research. Maybe this is a first step towards establishing the right countermechanisms to really create diamonds in European science and humanities.
Politics can’t force breakthroughs
Looking at the history of science, though, there is much room for scepticism about whether breakthroughs in research can be planned at all. Very often the brilliance of a new hypothesis or a new observation only becomes visible in retrospect, which is very unfortunate for New Public Management. Take, for example, Alfred Wegener, father of the theory of continental drift, the forerunner of plate tectonics. Without doubt his ideas were ground-breaking, maybe a bit too ground-breaking: a large part of the scientific community thought of him as a nutcase, a meteorologist trying to earn merit with some crazy ideas about geology. His publication record culminated in one important book on the matter, which was hardly cited during the first few decades of its existence. It took until the 1960s for his risky theory to become acceptable: the U.S. Navy became interested in locating submarines, and thus enough underwater studies could be conducted to confirm Wegener’s theories.
There are many prominent examples of initially unrecognized theories that later led to paradigm shifts in science. Hans Meerwein, for instance, conducted groundbreaking work in the 1920s postulating carbocations as reactive intermediates in organic chemistry. His views were met with great scepticism, which discouraged him from carrying on with this research topic. In 1994, George Olah received the Nobel Prize for his work on carbocation chemistry. Or consider Mendelian genetics, published by Mendel in 1866 but only accepted after 1915, when the chromosome theory of inheritance was formulated.
These stories reflect how difficult it is to thoroughly and accurately judge the quality of research. How can policy makers estimate the value of a new scientific idea if even the judgements of a scientist’s peers can be completely wrong? The theories and discoveries that really change the way we think about the world are hard to detect by standard scientometric indicators and are likely to come from unexpected directions.
Venture capital for creativity
In summary, it seems extremely difficult to influence such a complex system as science, with all its players and communication paths, in just the right way to push it towards its optimum. A little competition certainly stimulates the research system. On the other hand, the pressure applied might push it in a completely undesired direction, since the backlashes that a certain policy will provoke are not always predictable. And in the end it depends on courageous individuals to formulate revolutionary, paradigm-shifting ideas. Policy makers should acknowledge this special and unique feature of science and take the limits of plannability into account. Giving room and venture capital for courage, creativity, and risk might be of greater benefit to top-notch science than restriction, pressure, and evaluation. The real diamonds in science will be formed and recognized when their time has come.
— Leonie Anna Mueck
References
[1] J. Thomas, http://www.wisenet-australia.org/issue54/janthomas.htm, last accessed June 2012.
[2] L. Butler, “Explaining Australia’s increased share of ISI publications—the effects of a funding formula based on publication counts”, Research Policy 32 (2003) 143–155.
[3] T. Bolli, F. Somogyi, “Do competitively acquired funds induce universities to increase productivity?”, Research Policy 40 (2011) 136–147.
[4] U. Schmoch, T. Schubert, “Sustainability of incentives for excellent research – The German case”, Scientometrics 81 (2009) 195–218.
[5] S. Hornbostel, “Resonanzkatastrophen, Eigenschwingungen, harmonische und chaotische Bewegungen”, iFQ-Working Paper No. 9, November 2011.
[6] Grit Laudel, “The ‘quality myth’: Promoting and hindering conditions for acquiring research funds”, Higher Education 52 (2006) 375–403.