June 8th, 2011, by Prof. Dr. Stefan Hornbostel, Institute for Research Information and Quality Assurance
Interview with Prof. Hornbostel
What are the strengths and weaknesses of the German scientific system? German science has helped to resolve many questions, but for a long time it was unable to assess its own performance. To fill this conspicuous hole, the Deutsche Forschungsgemeinschaft established the Institute for Research Information and Quality Assurance (iFQ) in 2005 and appointed Stefan Hornbostel as founding director. A sociologist by training, he also holds a professorship for research evaluation at the Humboldt University in Berlin.
During his visit to Mainz in June 2011, we talked to Prof. Hornbostel about his work at the iFQ, the German scientific system, and the changes it has undergone in recent years.
JUnQ: Professor Hornbostel, you are the director of the Institute for Research Information and Quality Assurance (iFQ – Institut fuer Forschungsinformation und Qualitaetssicherung). What is the purpose of this institute?
Hornbostel: Originally, the iFQ was the reply to an evaluation of the Max Planck Society (MPG – Max-Planck-Gesellschaft) and the German Research Foundation (DFG – Deutsche Forschungsgemeinschaft): International reviewers had the impression that we barely know how the German scientific system works, what its incentives and deficits are, and what impact funding programs and political actions have. The purpose of the iFQ is to shed light on these questions.
JUnQ: Are there institutions in other countries that served as a model for the iFQ?
Hornbostel: Yes, there are. Germany is a latecomer in that regard. In the UK and the US, but also in the Netherlands, “Science of Science” arose much earlier. Its tradition goes back to the 1920s and 1930s. Yet while the idea was pursued systematically in the US, it remained of little importance in Germany for decades. Many individual disciplines, such as the history of science, the philosophy of science, and the sociology of science, are involved in this field here, but an integrated approach that also includes empirical methods is relatively new to Germany.
JUnQ: Can you outline what methods you use?
Hornbostel: We mostly use standard methods from social sciences like surveys, interviews, and text analyses. But some tools are new and specifically developed for our field, for example bibliometrics or the analysis of funding programs.
JUnQ: Do you target all disciplines, i.e. natural sciences, social sciences, and the humanities?
Hornbostel: Yes, we do. This is related to the fact that, when it was founded, the iFQ was strongly embedded in the DFG. From an international perspective, it is rather an exception that one single institution – the DFG – is in charge of funding all disciplines. Most countries distinguish at least between large scientific fields, but since there is this special German tradition, we deal with all disciplines, including the humanities. Not every method can be applied to all areas, owing to different scientific cultures. However, we do not want to restrict our studies to the natural sciences; we try to examine all disciplines.
JUnQ: Many scientists are sceptical towards evaluations in general as they consider them to be time-consuming and useless. Have you encountered such objections? How do you cope with them?
Hornbostel: This objection is not only raised by natural scientists, but also in the humanities. The introduction of quality and performance measures in science, and the public discourse about it, has nevertheless led to enormous changes in the German scientific system. As evaluations are nowadays carried out routinely, we may legitimately ask: Is the effort still in due proportion to its benefit? It may well be that evaluation efforts can be reduced once a certain level of performance is reached, without lowering the quality of the results. Our methods need to evolve together with their target areas; they need to be flexible so that they can be adapted to changes in science.
JUnQ: The question is often raised whether it is possible at all to “measure” science. Do you think this is possible?
Hornbostel: We can refer to a great example. Derek John de Solla Price, one of the founding fathers of bibliometrics, once began a lecture with the question: Why should we not apply scientific methods to science itself? This is an obvious question: Why should the sector that is most involved in generating methodologically controlled knowledge exclude itself from such study? We also need to bear in mind that science today is in a completely different shape than it was one hundred years ago. The amount of results, the cost, and the complexity of scientific infrastructures have increased dramatically. It is simply necessary to think about a meaningful concept of how to operationalize and measure important quantities such as the advancement of knowledge, but also efficiency.
JUnQ: In science, quantities like the h-Index and the Journal Impact Factor are often used to assess scientific performance. What do you think of these numbers?
Hornbostel: I have a low opinion of the h-Index. This index tries to condense several pieces of information that were previously represented by different indices. Such attempts to form a “super-index” usually cause more problems than benefits. The h-Index suffers from various problems that may lead to misjudgements if one is not aware that other factors, apart from scientific performance, affect it. The Journal Impact Factor was invented to help librarians decide which journals to subscribe to or cancel. While it served that purpose well, it is nowadays misused to assess the work of individual scientists, which cannot be justified at all, as there is a high danger of drawing wrong conclusions. In short, this indicator is helpful for certain fields of application, but it is also misused, thereby leading to problematic results.
JUnQ: The iFQ also surveys scientists about their satisfaction with their personal situation and with the state of science. Can you outline how the German scientific community perceives itself?
Hornbostel: We just completed an extensive survey among German professors, which included several issues of current science policy. I can summarize the results roughly as follows: German scientists from nearly all disciplines consider themselves to be competitive on an international level and a majority thinks that the funding conditions for research projects are very good. German scientists are in general not averse to rivalry and competition for public attention, money, and publications. However, we noticed a certain weariness concerning the numerous application-based funding procedures. Throughout all disciplines, scientists criticize the increasing scarcity of regular funding.
JUnQ: Has this perception changed over the years?
Hornbostel: This is not easy to determine, as there were not many systematic surveys in earlier years. Fortunately, the Allensbach Institute conducted surveys among German faculty members in the 1970s and 1980s. In comparison to these data, we clearly observed that the profound changes in the German scientific system since the beginning of the 1990s have left their marks on scientists’ perceptions. For example, we noticed a shift in perception regarding the question of whether money always accumulates in the hands of the same people. In this respect, scientists are now more critical than they were thirty years ago. Furthermore, today’s faculty members assign a more important role to impression management than their colleagues did in the 1970s.
JUnQ: What caused these changes?
Hornbostel: It is obvious that they result from an increasing focus on performance and competition in the German scientific system. Since the 1970s, the gap between regular funding and external funding has widened significantly. Today, it is almost impossible to do research without external application-based funding, while in the 1970s the level of regular funding was significantly higher. In addition, the performance of today’s scientists is constantly monitored on many levels: by the universities, by the federal states, by the German Council of Science and Humanities (Wissenschaftsrat), and by external rankings. A modern scientist’s life is different from what it was in the past, and the focus on competition is much stronger.
JUnQ: A key issue in the public debate regarding competition in science is the Excellence Initiative.
Hornbostel: The iFQ is evaluating the Excellence Initiative and has collected a lot of data on this issue. The picture we get is a little paradoxical. On the one hand, the Excellence Initiative was one of the great endeavors of recent years. In many interviews at different universities, we hear the same message again and again: things got going, not only because of additional money, but because of the novel idea of competing for prestigious titles that are visible to the public. Outdated structures were abandoned, new structures emerged that often crossed sectional boundaries, and many novel concepts were given a try. In short, the Excellence Initiative generated momentum in the German universities. On the other hand, we surveyed professors on the question of whether this concept of funding is suited to pushing research in Germany forward. In this regard, the impression is rather negative. In almost no scientific field is the Excellence Initiative considered a promising funding concept. Maybe the truth lies in the middle: the Excellence Initiative is unsuitable as a permanent institution, but in its historical setting it was important and helpful. This ranges from interdisciplinary contacts and new forms of organization to enhanced recruiting procedures and novel forms of support for young scientists. For example, graduate schools have emerged. Such new concepts not only led to changes within very short time periods, but also created enthusiasm for experimentation. This unideological eagerness to experiment is something new to the German scientific system, and something very positive.
JUnQ: Professor Hornbostel, I thank you for the interview.
Listen online (German):