Vol. 2, Issue 2, July 2012

Science under Pressure – Applying Le Chatelier’s Principle to Science Policy

October 9, 2012

As a researcher in Germany, you get ambiguous signals regarding the general situation of German science. On the one hand, everything seems to be working quite well: there is money, there are publications, and there are plenty of eager Ph.D. students. On the other hand, science and crisis seem to go together like two peas in a pod. For decades there has been constant talk of a nationwide crisis in science.

Granted, Germany has not had many Nobel Prize winners in recent years, there is no German university comparable to Harvard or Oxford, and a certain amount of brain drain, especially towards the US, cannot be denied. But sometimes one suspects that this crisis is not based on actual shortcomings in German science. It rather seems to be part of German identity, like Weissbier and Wagner. Possibly some of our complaints are merely an expression of our perfectionist and pessimistic German nature.

Despite these doubts about the severity of the crisis, a plethora of New Public Management policies, ranging from more autonomy for the universities to the Excellence Initiative, have been introduced in recent years to put the crisis to an end. They all share one notion: the research system will give its best if stimulated by competition and pressure. This seems reasonable; using competition and pressure to stimulate a system is not a new idea. But is this notion really helpful for science?

The chronic crisis of Australian science

To analyze the success of New Public Management strategies in fighting a decay in scientific output, let us turn to sunnier nations, where the impression of crisis cannot easily be linked to a stereotypical character trait. Take, for example, Australia. Down under, worries about science in crisis became acute in the late 1980s. Much like in the German Excellence Initiative era, concerns about Australia’s science losing ground were not confined to the scientific community but permeated large parts of the public. In 1988, for example, the Melbourne newspaper “The Age” published a four-part series entitled “Science is losing its heart”, painting a picture of a demoralized and demolished Australian science.

Consequently, measures were taken to stimulate Australian science and fortify it for global competition. In practice, this meant that the old funding system was replaced by competitive money distribution. In 1992, the Department of Employment, Education and Training introduced a policy that required every university to report its publication output. Funding was distributed accordingly: those whose publication output suggested a good performance were rewarded, while those who had performed badly ended up with less money in the following years. The mindset behind these measures was thus very similar to the New Public Management strategies we see in Germany these days: piling pressure on scientists will push them to peak performance.

The pressure was effective; the scientists reacted. By the turn of the millennium, Australia’s share of scientific articles in the Science Citation Index had increased by 25%. But, surprisingly, the perception of Australia’s science going through a crisis did not change. On the contrary, the crisis became chronic.

In 2000, Jan Thomas, then Vice-President of the Federation of Australian Scientific and Technological Societies, wrote

“There is little doubt that Australian science is in crisis. We need to dare to dream that this can change but we also need to pursue actions that may make the dreams become reality.” [1]

Scientists verifiably produced more output, and still there was “little doubt that Australian science is in crisis”? The reason the crisis went chronic was reflected by another indicator, not by the pure aggregate publication count: Australia’s citation impact, i.e., the number of times Australian publications got cited, had plummeted, dropping the country from 6th place among 11 OECD countries in 1988 to 10th place in 1993, where it still lingered in 2000.

The key to understanding the occurrences in Australia lies in the incentives that the policy makers had created; it lies in the direction of the pressure that had been applied to the system. The one and only thing that led to more funds for a specific university was the aggregate count of articles listed in the Science Citation Index. Not the quality of an article, but only the number of published articles was the criterion that brought in more money. It was easy for scientists to play by this new rule: become a salami slicer, cut your results into as many small pieces as possible, and publish in journals with lower standards and lower impact.
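The arithmetic behind the salami-slicing incentive is easy to sketch. In this illustrative toy calculation (the numbers are hypothetical, not data from the Australian case), a fixed body of results earns a fixed pool of citations, so slicing it into more papers raises the rewarded publication count while diluting the measured impact per paper:

```python
def impact_per_paper(total_citations, n_papers):
    """Average citations per paper when a fixed body of work
    is spread over n_papers publications."""
    return total_citations / n_papers

# Assumed: the underlying work attracts 120 citations in total,
# regardless of how it is packaged.
total_citations = 120

substantial = impact_per_paper(total_citations, n_papers=4)   # 4 full papers
sliced = impact_per_paper(total_citations, n_papers=12)       # 12 thin slices

print(substantial)  # 30.0 citations per paper
print(sliced)       # 10.0 citations per paper
```

A count-only funding formula pays the 12-paper strategy three times as much as the 4-paper strategy, even though the average impact per paper drops by the same factor of three, which is exactly the pattern the Australian indicators showed.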

Le Chatelier and Research Policy

Obviously, the policy makers in Australia had never heard of thermodynamics, let alone of Le Chatelier’s principle. In the late 19th century, the French chemist Henri Louis Le Chatelier formulated a qualitative law about how a change in conditions, for example a change in pressure, affects a chemical system: the equilibrium shifts to counteract the change, and a new equilibrium is established. This principle, that a change of the status quo in a system will provoke a backlash, is ubiquitous and has even been applied to economics.
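A textbook example (standard chemistry, not specific to this essay) makes the principle concrete. In the ammonia synthesis equilibrium

```latex
\mathrm{N_2(g)} + 3\,\mathrm{H_2(g)} \;\rightleftharpoons\; 2\,\mathrm{NH_3(g)}
```

four moles of gas on the left face only two on the right. Raising the pressure therefore shifts the equilibrium towards the product side, the side with fewer gas molecules, partially counteracting the imposed change rather than simply amplifying it.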

It is just as easy to apply Le Chatelier’s principle to research policy. Scientists are (usually) clever enough to understand the rules of the funding game. If policy makers change the rules, scientists will change their behavior accordingly. And if the rule is “Maximize the number of publications to get more money”, that is exactly what they are going to do.

The Australian case, however worrying it was for Australian science, provides valuable insights into the possible reactions of a research system to pressure. Linda Butler, an Australian scientist active in the field of scientometrics, conducted a detailed study of the effects that led to the chronic Australian research crisis. She writes that in the Australian system it was

“possible for university researchers to put a dollar value (either to themselves or to their university) on their ability to place an article in an ISI journal” and concludes: “…the driving force behind the Australian trends appears to lie with the increased culture of evaluation faced by the sector. … In consequence, journal publication productivity has increased significantly in the last decade, but its impact has declined.” [2]

Diamonds are formed under pressure – is that true for research?

If a research policy can demolish science, this suggests, in reverse, that with a clever set of rules policy makers can push the scientific community to peak performance. And there is evidence that this clever set of rules should indeed be based on a competitive funding structure. In a careful study of the productivity of Swiss research institutions, Thomas Bolli and Frank Somogyi showed that research productivity increases under a competitive funding policy [3]. Other scientometric analyses reveal that competition in funding distribution has a positive impact on a research institution’s position in the Shanghai university ranking. All these studies share one problematic aspect, though: how should research productivity or research performance be measured?

Just like in the Australian case, Bolli and Somogyi merely count publications and declare their aggregate number to be a performance indicator. We have learned from the Australian case that this is likely to be counterproductive as an incentive and that we should look for other possibilities.

We can turn back to crisis-driven Germany, for example. In the land of poets and thinkers, a popular performance indicator for research is a very business-oriented one: the amount of acquired third-party funding. This seems to be a rational choice, since the application procedure for funding certainly ensures quality and separates the wheat from the chaff. Unfortunately, it is not that easy. In 2009, Ulrich Schmoch and Torben Schubert from the Karlsruhe Institute of Technology published a study showing that increasing the share of third-party funding does not always enhance productivity in research [4]. A plot of the share of third-party funded research against productivity rather takes on an inverse U-shape. This means that there seems to be a point of saturation, beyond which a larger share of third-party funded research decreases research productivity rather than increasing it. Schmoch and Schubert wisely conclude that “indicator sets should strive for sustainable incentives, which can be guaranteed if the sets are broad enough.”
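The inverse-U relationship can be sketched with a minimal toy model (the quadratic form and its coefficients are hypothetical, chosen only to make the shape visible, and are not taken from Schmoch and Schubert’s data): productivity first rises with the third-party funding share, then falls past a saturation point.

```python
def productivity(s, a=4.0, b=5.0):
    """Toy inverse-U model: p(s) = a*s - b*s**2 for a
    third-party funding share s between 0 and 1."""
    return a * s - b * s**2

# The saturation point is where dp/ds = a - 2*b*s = 0, i.e. s* = a / (2*b).
s_star = 4.0 / (2 * 5.0)  # = 0.4

print(productivity(0.2))     # 0.6, still rising
print(productivity(s_star))  # 0.8, the peak
print(productivity(0.6))     # 0.6, past saturation: more share, less output
```

The point of the sketch is qualitative: below the saturation point, competitive third-party funding stimulates output; beyond it, the overhead of constant acquisition eats the gains, which is why a single funding-share indicator makes a poor incentive on its own.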

This clever set of rules that gets the pressure just right to form the scientific diamonds thus hinges on the appropriate indicator for research quality. And, sadly, the definition of the latter is not at all straightforward.

The Quality Myth

Even if we had a good set of indicators, the question remains whether this would guarantee that the best researchers get the grants. Most of today’s more or less frustrated researchers will undoubtedly confirm the first piece of evidence against this quality-only assumption: it seems inevitable that the rich scientists get richer while the poor get poorer. If only the quality of current research mattered, we would expect some rebalancing between rich and poor researchers every once in a while. This puzzling effect has a name in sociology: it is called the Matthew effect, after the same Matthew who wrote down the Gospel. In the Parable of the Talents, we can read: “For unto every one that hath shall be given, and he shall have abundance: but from him that hath not shall be taken away even that which he hath.”

The Matthew effect seems to be amplified by competitive research funding: in 1984, before the introduction of New Public Management to German science policy, less than 10% of German professors completely agreed with the statement “It is always the same people who acquire the funds for their research”. In 2010, i.e., in the middle of the German Excellence Initiative era, more than 20% agreed [5]. Of course, these numbers only reflect personal opinions and perhaps the hurt pride of unsuccessful researchers. But there is other evidence for the Matthew effect playing a crucial role in funding acquisition.

In a 2006 study with the telling title “The Quality Myth: Promoting and hindering conditions for acquiring research funds”, Grit Laudel, another scientometrics researcher working in Australia, gives some answers as to why the quality-only assumption does not necessarily hold and why the money might always pile up in the same hands.

In the study, 45 German and 21 Australian experimental physicists were interviewed about the difficulties of funding acquisition. Laudel identifies an abundance of non-quality-related factors that influence the outcome of a researcher’s grant application. Among them are know-how in fundraising and the availability of high-quality collaborators, but with regard to the Matthew effect, the most important are probably a continuous research trail on the topic in question and the amount and significance of prior research. One of the interviewed researchers pins the latter down as a chicken-and-egg problem:

“… if you already have lots of publications in an area then it’s easier to get it funded”.

Another German scientist says:

“For completely new things, new ideas you hardly get money. … If you intend to start something new, you need to do some research to show that it works.”[6]

Consequently, young, unknown researchers, who could introduce new and fresh ideas to science, nowadays seem to be under twice the pressure: they have to acquire funds to build up a reputation and a better ranking, but they somehow need to get lots of research done before their funding applications have any prospect of success, an unresolved paradox.

Pressure towards low-risk, mainstream research

Regarding Le Chatelier’s principle, Laudel’s study shows another interesting feature: a competitive research funding system influences not only the scientists’ publication behavior. Among the adaptation strategies that scientists apply to obtain more funds is also the choice of research topic. A German physicist says:

“The reviewers of the Deutsche Forschungsgemeinschaft (German Research Society) are very reluctant to give you the freedom to just try something”.[6]

Indeed, one adaptation strategy of scientists facing more competition is to conduct low-risk research, often selecting predetermined and “cheap” topics.

Laudel concludes that:

“… it would make sense to have … mechanisms to counteract the pressure of external funds towards mainstream, low-risk, application-oriented research”.[6]

Recently, the European Union has picked up this idea and now specifically funds so-called “high-risk, high-gain” research in a venture-style approach to funding. The 2012 ERC Advanced Grants call explicitly encourages groundbreaking, high-risk research. Maybe this is a first step towards establishing the right countermechanisms to really create diamonds in European science and the humanities.

Politics can’t force breakthroughs

Looking at the history of science, though, there is much room for scepticism about the plannability of breakthroughs in research. Very often the brilliance of a new hypothesis or a new observation only becomes visible in retrospect, which is very unfortunate for New Public Management. Take, for example, Alfred Wegener, father of the theory of continental drift. Without doubt his ideas were groundbreaking, maybe a bit too groundbreaking: a large part of the scientific community thought of him as a nutcase, a meteorologist trying to earn merits with some crazy ideas about geology. His publication record culminated in one important book on the matter, which was hardly cited in the first few decades of its existence. It took until the 1960s for his risky theory to become acceptable: the U.S. Navy got interested in locating submarines, and thus enough underwater studies could be conducted to confirm Wegener’s ideas.

There are many prominent examples of initially unrecognized theories that later led to paradigm shifts in science. Hans Meerwein, for instance, conducted groundbreaking work in the 1920s, postulating carbocations to be reactive intermediates in organic chemistry. His views were met with great scepticism, which discouraged him from carrying on with this research topic. In 1994, George Olah received the Nobel Prize for work on carbocations. Or take Mendelian genetics, published by Mendel in 1866 but only accepted after 1915, when the chromosome theory of inheritance was formulated.

These stories reflect how difficult it is to thoroughly and accurately judge the quality of research. How can policy makers estimate the value of a new scientific idea if even the judgements of a scientist’s peers can be completely wrong? The theories and discoveries that really change the way we think about the world are hard to detect by standard scientometric indicators and are likely to come from unexpected directions.

Venture capital for creativity

In summary, it seems extremely difficult to influence a system as complex as science, with all its players and communication paths, in just the right way to push it towards its optimum. A little competition certainly stimulates the research system. On the other hand, the pressure applied might cause it to veer in a completely undesired direction, since the backlashes that a certain policy will cause are not always predictable. And in the end it depends on courageous individuals to formulate revolutionary and paradigm-shifting ideas. Policy makers should acknowledge this special and unique feature of science and take the lack of plannability into account. Giving room and venture capital to courage, creativity, and risk might be of greater benefit to top-notch science than restriction, pressure, and evaluation. The real diamonds in science will be formed and recognized when their time has come.

— Leonie Anna Mueck


[1] http://www.wisenet-australia.org/issue54/janthomas.htm, last accessed June 2012.

[2] L. Butler, “Explaining Australia’s increased share of ISI publications—the effects of a funding formula based on publication counts”, Research Policy 32 (2003) 143–155.

[3] T. Bolli, F. Somogyi, “Do competitively acquired funds induce universities to increase productivity?”, Research Policy 40 (2011) 136–147.

[4] U. Schmoch, T. Schubert, “Sustainability of incentives for excellent research – The German case”, Scientometrics 81 (2009) 195–218.

[5] S. Hornbostel, “Resonanzkatastrophen, Eigenschwingungen, harmonische und chaotische Bewegungen”, iFQ-Working Paper No. 9, November 2011.

[6] G. Laudel, “The ‘quality myth’: Promoting and hindering conditions for acquiring research funds”, Higher Education 52 (2006) 375–403.

October 9, 2012

Science has changed. From a vocation to a career path and to the detriment of the subject, according to some observers. From an amusement of the few to an important economic factor and to the benefit of society, according to others. Yet, one thing seems to be clear: Scientists have been irrevocably expelled from the ivory tower.

Back in the old days, when words like grant proposal or publication record did not yet exist, the proverbial mad scientists delved into their piece of research but did not care about what happened around them. They gained exciting insights but did not bother anyone else with them. Their opportunities were limited, but they were satisfied with them. In contrast, today’s scientists find themselves thrown into an ocean of endless possibilities, where they are exposed to the chill wind of competition and hit from every angle by waves like third-party funding, mid-term evaluations, or the particularly notorious h-index.

This essay is not about the question whether the time referred to in the first picture ever existed or whether it is just the manifestation of a desire. Nor will we judge whether the second picture is an adequate description of contemporary science. Instead, we seek to present some impressions from the ocean out there, to stay with the metaphor. For “Science under Pressure”, the overriding topic of this issue of JUnQ, has not only political, cultural, and social implications, but carries a human aspect as well. This latter aspect is the most difficult to capture, as it can hardly be operationalized or discussed in abstract terms. It can, however, be put into concrete terms based on the experiences of concrete people. How do the people out there set their course? Do they long to return to the ivory tower, or do they feel comfortable with their situation? Do they feel intimidated by the waves they encounter, or do they enjoy riding them?

To learn about the impact of pressure on young researchers, we interviewed Professor Luka-Krausgrill, director of the psychotherapeutic service center (PSC; in German: Psychotherapeutische Beratungsstelle, PBS) at the University of Mainz. Right at the outset, she points out that pressure is ubiquitous: “In a competition-based society like ours, individuals experience pressure on many levels and for various reasons”, she explains. “Our focus here at the PSC is on undergraduate students. Yet 6.5% of our clients, about 50 people per year, are Ph.D. students and scientists pursuing a Habilitation.” This latter group, i.e., young scientists, faces multiple challenges: they need to conduct excellent research, teach students, acquire funding, and publish high-quality manuscripts, to name just a few tasks, without being guaranteed a permanent position. Hence, it may take a certain amount of courage to pursue a career in science, given the high degree of uncertainty. However, Prof. Luka-Krausgrill advises against singling out science in this regard. “Being put under pressure is not a phenomenon limited to the scientific community, and it is not always something negative. Self-created positive pressure can help in keeping track of personal long-term goals. The important thing is to identify these long-term goals and to have a clear idea of the driving forces behind them. This holds especially true for young people at the outset of their career, whether in science or other fields”, Prof. Luka-Krausgrill continues. No matter what course someone sets, they will run into heavy seas sooner or later. How to stay the course and how to cope with defeat under such circumstances are the crucial questions, according to Prof. Luka-Krausgrill, and her answer is simple but not easy at all: it is only intrinsic motivation that keeps people going. Yet this motivation does not come for free; it must be maintained and cultivated.

It seems that seafaring is not for landsmen. But are scientists really so tough that they can stand the pressure piled on them? At least our second interviewee probably is: Michaela (real name withheld), a physicist who currently holds a junior professorship. Introduced in Germany in 2002, the junior professorship is a six-year, time-limited position for promising young scholars, created as a replacement for the Habilitation; unlike in the tenure-track schemes used, e.g., in the USA, the employing university is not supposed to offer tenure, and junior professors are instead expected to apply for professorships at other universities. Michaela emphasizes her genuine interest in and motivation for science. “The strongest pressure I feel stems from science itself”, Michaela explains. “This is what pushed me to pursue a scientific career. In my current position as a junior professor I can freely decide which projects I want to tackle”, she continues. “Acquiring a permanent position, i.e., a professorship, is a career goal of mine, but I do not worry about it all the time. I always chose topics I was interested in and did not try to make the smartest career move”, she adds.

Clearly, Michaela is a dedicated scientist, but has she never experienced pressure in a negative way? When asked about it, she mentions one issue above all: as a junior professor, she does not hold a permanent position. Thus, Michaela already knows that her contract will expire regardless of her performance. As a consequence, she will be forced to find a new position in a few years. Having worked in four different countries over the past fifteen years, she has amply demonstrated her flexibility, but her private life, especially her two little children, would benefit heavily if she could plan her future under more stable circumstances. With that said, it is easy to see that Michaela would prefer a tenure-track position, although she does not feel uncomfortable in her position as a junior professor, including its budget and equipment. Concerning the evaluations junior professors have to undergo, Michaela has observed an impact on her behavior: her publication record and the third-party funding granted to her constitute important evaluation criteria, which is why she feels forced to spend time writing research papers and funding proposals. “I would prefer spending this time on actual research”, she says. However, Michaela is aware that publications play a key role in science: “It is important to share your results with the community”, she explains, and adds: “Writing proposals is not a pleasure, but it helps in organizing and structuring future research projects.”

Hence, does she think that pressure is helpful for science? “Yes, but only up to a certain extent. I spent a couple of years in the UK as a postdoctoral researcher. One thing I recall from this time is that small research groups were struggling in particular, since big science consortia were heavily favored by the British funding system. Furthermore, mainstream research was normally preferred over outlying or unconventional projects. Young scholars were thus not really free in their choice of topic. If you have a crazy idea and want to give it a try”, Michaela points out, “it may be difficult to acquire any funds. That means that funding systems may cause people to concentrate on low-risk research.”


So much for the situation of a faculty member in physics. Let us now turn to Mario, a postdoctoral researcher in the field of sociology. Is he confronted with similar issues? Can he confirm Michaela’s statements? “As a postdoctoral researcher, I enjoy great freedom in my work”, he says. “I can just follow my interests, and in principle there is plenty of room for creative and unconventional ideas. The amount of routine work is small; I rather feel like a writer”, he explains, adding that it is hard to find a job outside of science that shares the advantages of his current position.

“But that freedom also implies pressure”, Mario continues. “If I want to continue my career in science, I need to assert my position, but this is not a simple task. Back in the old days, the path to a professorship was quite clear; a career in academic sociology proceeded as follows: you spent an average of six years on your Ph.D. thesis and then got a permanent position as a scientific assistant, where you worked on your habilitation treatise. At these stages, it was perfectly normal to muddle along for a couple of years without a clear course; there is even a special word for that in the German language: ‘Herumdoktern’. The concept of the postdoc was adopted from the natural sciences”, Mario explains; “temporary positions like the one I hold used to be uncommon in sociology until recently. But that is not the only thing that has changed: there is increasing pressure to complete your studies more quickly and to spend less time on your Ph.D. Spending six years on a dissertation, as I did, is now considered highly unusual.”

Was life easier for a sociologist in former times? Mario is reluctant to agree with this statement: “We need to bear in mind that the ‘Herumdoktern’ carried drawbacks as well.” He refers to his own Ph.D.: “I enjoyed great freedom but was completely on my own. I felt like a lone wolf, uncommitted and a bit lost; it was hard to stay motivated under such circumstances, and my isolation had serious consequences. For example, I did not publish any paper during my Ph.D., since I thought my work did not meet the requirements. Sharing experiences with peers would have been of great benefit to me. I am quite sure that I would have managed to publish my work if I had enjoyed the support of a peer group. In return, I would have accepted a higher degree of pressure. In principle, this applies to my current situation as well: more collaborations would imply more pressure in the form of commitments, but would create synergies at the same time. It would be positive pressure”, Mario concludes.

There are, however, other forms of pressure about which Mario is more skeptical. “I consider teaching an important part of my work”, he says. “But as a teacher I have to cope with increasing time pressure. How much time should I spend on reading a diploma thesis? How much time should I dedicate to supervising students? Such questions come to my mind, and I sometimes wonder if I work too slowly. Yet I am convinced that careful and thorough teaching takes time. My fear is that the quality of my teaching may be affected by the pressure piled on me.” He adds that the quality of classes does not increase by evaluating them permanently. “This makes me feel as if I were under suspicion”, he says.

Asked about the role of publications, Mario tells the following story: “Recently I acted as a guest editor and put together a special issue of a journal which I considered quite prestigious by traditional standards. When I asked around, some people refused to contribute on the grounds that the journal was not listed in the ISI database.” (The Institute for Scientific Information (ISI) maintains citation databases covering thousands of academic journals; its specialties are citation indexing and analysis.) Mario explains that this is just a visible symptom of a substantial change in publication behavior: “Today, articles in peer-reviewed English-language journals are the only thing that counts, while traditionally the understanding of a publication record was much broader. Monographs, anthologies, and articles in German-language journals contributed to a scientist’s reputation as well. Actually, the situation is even worse”, he continues, “as there is a further bias in terms of content. There are highly prestigious flagship journals which focus on certain methods and topics that are already established. My science is simply not publishable in such a journal; it is too exotic by their standards.” Nevertheless, the new standards have an impact on Mario’s mindset. “I have sometimes caught myself assessing fellow scientists by means of their flagship publications, even though I am actually aware of how questionable that is”, he says.

Funding is yet another topic Mario worries about: “There are funding opportunities which seem to be available only to the elites in a given field. Most systems favor mainstream topics and well-recognized applicants with a long scientific record over unconventional ideas and people new to the field. Some research institutions have already realized that and provide ‘venture capital’ for projects which are promising but look too daring by the standards of the ‘normal’ funding agencies.” Mario adds that such an approach could be a valuable complement to the standard procedure, in which the applicant’s credentials sometimes matter more than the research proposed.

Now that we have learned about the sources of pressure, can he tell us how it manifests itself in his life? Mario is reluctant to answer this question: “Weakness is not part of a scientist’s public image. I usually do not talk about how I suffer from pressure; I try to fight it out with myself.” Eventually, however, Mario shares some of the problems he has faced. “I suffered from writer’s block on several occasions”, he reports. “It also happened that I struggled to overcome thinking barriers or temporarily lost my creativity. Yet so far I have always managed to surmount these problems.” Staying with the metaphor that we introduced at the beginning, we can conclude that even real seamen can get seasick, and a question naturally arising at this point is: how to deal with seasickness?

To answer this question, let us return to the PSC and ask Prof. Luka-Krausgrill. “In principle, we can provide help in such a situation”, she says and adds: “The PSC should be seen as an enabler. Our aim is to strengthen people. We cannot relieve any pressure, but we provide the means for coping with it. We cannot supply our clients with a personal goal, but we assist them in setting their goals and defining a path to achieve them.”

However, seasickness is sometimes not related to heavy seas at all, and some people would suffer from landsickness if they stayed on dry land: “In many cases, we eventually found out that problems were only seemingly related to work but actually rooted much deeper in family and relationship issues”, Prof. Luka-Krausgrill explains. “Also, a substantial share of our clients (52% in 2009) suffers from an actual mental disorder. In most of these cases (40% in 2009), psychotherapy is indicated. We then propose that the client undergo therapy at an external institution. In less severe cases, we offer counseling services as well as group courses here at the PSC. In the latter, we teach, for example, communication and management skills.”

Are people struggling more often nowadays than in the past? In 2011, the PSC had 785 clients, an increase of almost 70% compared to five years earlier. However, Prof. Luka-Krausgrill warns against interpreting this surge as an increase in pressure in the scientific system. “Today, people are more open-minded about mental health issues than they were in earlier times. Psychological counseling and related services have gained more social acceptance in recent years”, she explains. “That said, it is not so easy to establish a connection between the constant rise in the number of clients we observe and an alleged increase in pressure.” As an example, Prof. Luka-Krausgrill refers to the implementation of bachelor’s and master’s programs at the University of Mainz: “We constantly monitor the reasons for approaching us, but we did not see any significant changes related to the switch from the traditional diploma and magister programs to the new courses. In public perception, the rumor goes that the new programs put more pressure on students, but at least according to our surveys conducted at the University of Mainz, the problems that students face are quite stable over time. We rather observe an increasing satisfaction among our clients with their studies in recent years. In 2009, 23% were satisfied or very satisfied, while the same applied to 37% in 2011.” However, Prof. Luka-Krausgrill emphasizes one feature of the new bachelor’s and master’s programs: “Since the new courses are more structured, a lot of problems become visible at an earlier stage. A diploma student could potentially keep taking classes for 25 semesters and then leave university without any degree. This has become more difficult with the more structured new programs. For me, that is a good thing”, she concludes.

This sounds like a good closing line. So, let us return to the mainland and recapitulate what we have learned on our trip to the ocean of science. Can we leave the high sea without worrying? We do not need to worry about Mario or Michaela. They are experienced seafarers and not in danger of capsizing. With their intrinsic motivation, they have at hand the most valuable nautical instrument: a compass that guides them. That is probably why they prefer seafaring to a calm life on land, although they have experienced all kinds of problems arising from heavy seas. They do not object to the wind of competition, even though it sometimes blows from ahead. Funding, publishing, and teaching are the names of some of the winds they are exposed to, and these winds sometimes form waves that look scary to landsmen. Yet seamen know how to deal with waves and do not perceive them as scary from the outset: "positive pressure" was one of the key terms that all our interview partners used.

However, there are other people whom we should rather worry about, namely those about to set sail. We met two seafarers and saw that sailing the ocean of science demands a lot of strength and toughness. "Weakness is not part of a scientist's public image", as our interviewee Mario put it, but weakness is an inherent part of the human condition. Not everyone can cope with this contradiction, and thus not everyone is ready for a life as a scientist. Yet we also learned about the advantages of a life on the high sea and the importance of the inner compass. And this is what our final message is about. It was pointed out almost two thousand years ago by the Roman philosopher Seneca, but it is still valid: When a man does not know what harbor he is making for, no wind is the right wind.

— Thomas Jagau

Oct 09, 2012

Dear Reader,

When the vice president for research of the Johannes Gutenberg-University, Ulrich Foerstermann, entered the Auditorium Maximum on 15 June, there was something strange about the atmosphere in the room. By all rights, the air should have been permeated by breath-taking suspense, and everyone present should have lapsed into anticipatory silence. Curiously enough, neither happened. Admittedly, the expression on Foerstermann's face already suggested what he was about to announce — but most of the audience already knew anyway. JGU had not succeeded in the 3rd line of funding of the Federal Excellence Initiative. Over the preceding months, university authorities, researchers, nonacademic staff and students had invested a lot of time, effort, money and heart in writing proposals, preparing and rehearsing institutional visits, and overcoming stage fright, all in order to eventually be crowned one of Germany's "elite" universities and — much more importantly — earn an additional 18 million euros of annual funding. The hopes were cautiously high, yet by far exceeded by the pressure. The message that the unfortunate Mr. Foerstermann had to deliver that afternoon could easily have become a big blowoff. But it didn't. Physically speaking, preventing a high-pressure container from bursting is all about even distribution. So in this case, it may have been just the right distribution of two things that helped avert a blowoff: (1) expectations and (2) — smartphones. The latter worked as veritable safety valves: rather than hitting all those present at the same time, the bad news reached the crowd via German Press Agency push alerts at several points, from which the effect could slowly spread across the room, until the pressure was evenly released.

Science under Pressure is day-to-day business for everyone working at JUnQ: after all, we are young scholars at the beginning of our careers. As the final compiling and editing of this issue took place during the final days of the third Excellence Initiative, it seemed only appropriate that Science under Pressure be its key topic. It is my great pleasure to present to you this fourth issue of JUnQ. Science and pressure are intrinsically associated: in a way, science is always under pressure — its purpose is to find explanations and solutions for societal and ecological challenges, which themselves appear to become more and more pressing. Besides that, however, the scientific system and community itself seem to be organized by a complex interplay of external and internal pressures. During the last decades, the German scientific system has seen quite a lot of changes. Universities have become increasingly dependent on third-party funding, and with competition growing internationally, time pressure and the pressure to publish keep rising. Lately, German policymakers have been counting on the potential of competitive spirit to inspire and foster top-level research. Of course, the money behind programs like the Excellence Initiative can take a lot of pressure off a university (and certainly could have done so in the case of Mainz) and enable scientists to do outstanding research: in the short term, it allows for building, hiring staff, buying equipment, and maybe even coping with the steadily growing number of beginning students each semester — creating the vital infrastructural environment excellent research depends on. In the long run, the symbolic power of the evocative title "elite university" alone can create visibility, which in turn can help in gaining follow-up funding and attracting high-end research(ers).

But what looks so appealing naturally has its downsides, too. As we all know, noblesse oblige, and the exclusive label can raise expectations immensely, which ultimately translates into higher pressure for the individual researcher. The label and funding come with continuous and close observation by scientometricians and questionable measures of quality assessment, and they force research into a tight time regime: the funding period of five years is often not long enough to make progress that satisfies the competition's criteria. In order for interdisciplinary research cooperatives (very popular with Initiative evaluators) to work, however, a common ground of understanding has to be established first. This may well take some time, perhaps particularly in the humanities and social sciences (which are systematically at a disadvantage in this competition). This operational basis cannot be reached by force and according to schedule. In science, unlike with diamonds, pressure is not the main ingredient needed to form something great: it is time. Among the less fortunate are those universities, clusters and graduate schools that fail to deliver in time. They lose the noble prefix and no longer benefit from the additional funding and reputation. Competition not only offers possibilities to foster excellent research, but also to "fail" with high public impact.

In this issue, we offer a glimpse into the complex inner workings of the pressure equilibrium beneath the scientific system. Obviously, it takes a lot of variables to understand them, and those variables have to be integrated into a model. Leonie Mueck's article offers one such model to help us understand the logic and principles of pressure and science, as well as to assess the effectiveness of the pressure exerted by competitive funding. Besides science as a whole, it is first and foremost the individual scientist, from student to junior professor to dean, who is affected by various sorts of pressure. This may especially be the case for young researchers. The qualification phase is a period of precarious employment, and time pressure is always high. Many young scholars do not know whether they will manage to achieve a professorship by forty, and they make their way hand over hand from one temporary employment to the next. When the pressure gets too high, some make an early exit from academia. Is external pressure an effective motivation for scientists? How do external and subjective personal pressure relate? How do scientists deal with pressure? To get an inside view on these questions, Thomas Jagau talked to scientists under pressure and shares their experiences.

All pressure aside, I conclude this editorial note on three high notes: firstly, JUnQ has again received public recognition and has been awarded the "Deutscher Ideenpreis" (German Prize for Ideas) 2012. Secondly, the founding of JUnQ e.V. has made progress, and soon everyone interested in supporting JUnQ can become a member of the association. Finally, and it is my special pleasure to announce this, regular readers will (have) notice(d) that this issue of JUnQ is a first in that it features articles not only from the natural and life sciences, but from the humanities as well. We are delighted that the JUnQ idea is spreading across academic disciplines, and we encourage scholars from all academic fields to keep contributing their current and pending "UnQs" and notable attempts to solve them.

Tobias Boll

Aug 25, 2012

Guest article by Konradin Metze

Journal of Unsolved Questions, 2, 2, Preface, XV-XVII, 2012


Konradin Metze, MD, PhD, pathologist, is leader of the research group Analytical Cellular Pathology, a member of the National Institute of Science and Technology on Photonics Applied to Cell Biology (INFABIC), professor in the postgraduate courses of Medical Pathophysiology and Medical Sciences at the University of Campinas, Brazil, and an academic editor of the scientific electronic journal PLoS ONE.

e-mail: kmetze at fcm.unicamp.br

The evaluation of science is currently a highly debated matter at universities and research institutions, in scientific journals, and also in the media in general. Researchers want to produce science of high impact. The aim of this essay is to make some critical reflections on the impact of science and especially its evaluation.

First, we have to define the concept of the impact of science. It is necessary to think about who or what will be influenced by science. According to this question, we can stratify impact in science into four types:

1. The intellectual impact, as the degree of change in scientific concepts caused by the development or improvement of theories or hypotheses based on observations or theoretical reflections.

2. The social impact, as the degree of change in the life or environment of individuals or groups of people caused by scientific theories or hypotheses.

3. The financial impact, as the degree of economic change for the "corporations" supporting scientific activity, such as companies, universities, or governmental departments, due to the activity of scientists.

4. The media impact, as the degree of presence of research or researchers in the media.

A strong intellectual and social dimension of science has always been present. Its financial and media impact, however, has gained increasing importance in recent decades.

Regarding the question of measurement, the financial impact can easily be defined as a variable proportional to the money spent on research or earned through patents, newly developed products, etc. Nowadays, financial impact also includes changes in share values at the stock exchange due to new inventions or product recalls (for instance, of pharmaceutical drugs). We are also able to estimate the media impact in a relatively easy way, for instance by quantifying the number or extent of reports on scientific discoveries or research groups in the lay media, or by public opinion research.

Whereas the measurement of the financial and media impact is to some degree easy, this is not true for the intellectual and social impact, since it cannot be done in a direct way. For this purpose we have to look for "substitute variables" (proxies), which can give only rough estimates in an indirect manner. The lack of a generally accepted way of measurement provokes a continuous broad discussion, of course.

One of the main problems is that impact can only be seen from a historical point of view; that is, we need some observation time in order to know how the community was influenced by a publication, if this ever happened. For the intellectual impact, the method of counting only the number of a researcher's publications, unfortunately still in use, must be considered inadequate, because it does not measure the reaction of the community.

A better proxy for the intellectual impact is the number of citations of a paper in subsequent scientific contributions. This concept was introduced by E. Garfield in the 1960s. Today there are several data sources, for instance Web of Science, produced by Thomson Reuters. There we can find citations to scientific contributions published as early as 1898 within a selected pool of journals, comprising about 8000 journals in the Science edition and about 2700 journals in the Social Sciences edition of 2010. Books and proceedings have recently begun to be included. The citation counts cited in this essay come from this data source. A similar service is offered by SCOPUS (Elsevier), where the screening for citations of a paper is done in a considerably larger pool of periodicals; this system, however, includes only citations from 1996 onward. A freely available web-based program created by Harzing lists citations of publications on websites [1].

The main question is whether we can consider the number of citations a reliable estimate of the intellectual impact of a publication. Generally, the majority of researchers believe this. Without any doubt, it is better to use the number of citations to a publication than only the impact factor of the journal in which it was published.

The impact factor of a journal is somehow an estimate of the "mean citedness" of an article in this periodical [2]. Its uncritical use for the evaluation of individual manuscripts, single researchers or research groups is detrimental to science, because a vicious circle between bureaucrats, researchers, editors, and the impact factor itself will be created [2]. Furthermore, from the point of view of scientific methodology, it is nonsense to use the proxy of a proxy in order to measure something. Therefore, for the evaluation of the intellectual impact, the number of citations to the work under discussion is without any doubt better than the use of the impact factor. Some critical remarks are in order, however. It is well known that purely methodological papers or technical notes, which do not create or modify hypotheses or theories, may receive very high citation counts. Here are some examples: a meeting abstract written by Karnovsky [3], with a short description of a fixative for electron microscopy, has been cited 7,470 times since 1965. A method for quantifying proteins, described by Lowry and co-workers [4], has been cited 299,360 times since 1951. An interesting phenomenon was caused by a publication in a crystallography journal in 2008. In this review paper [5], G. Sheldrick described a computer program for the analysis of molecular structures. Furthermore, a link to its open internet access was given, and the phrase added: "This paper could serve as a general literature citation when one or more of the open-source SHELX programs … are employed in the course of a crystal-structure determination." In the roughly four and a half years since its publication, the paper has accumulated 26,660 citations. In this case, the citations can be seen as a kind of payment for the free use of a computer program for scientific analysis.
Since the beginning of 2009, all manuscripts accepted by the International Journal of Cardiology must contain a citation to an article on ethical authorship [6] written by the editor in the same journal in January 2009. To date, 1,976 citations can be counted.

In contrast, we can demonstrate that highly relevant, revolutionary and paradigm-changing publications may have relatively low citation counts. Einstein was honored with the Nobel prize in physics for his work on the photoelectric effect, but his publications on this topic were rarely cited. His main publication on the photoelectric effect from 1905 [7] received 695 citations, equivalent to a mean of fewer than 7 citations per year. Only 89 citations to a subsequent paper on radiation [8] can be found in Web of Knowledge. Georges Lemaitre, a theoretical cosmologist, created the theory of the expansion of the universe, also called the "big bang theory". In 1927, he published his principal ideas in a paper in French, which was cited only 177 times (including 21 erroneous citations) [9]. Four years later he summarized his theory in a communication to Nature [10]. According to Web of Science, there are only 24 correct citations (and an additional 21 incorrect ones) to this paper. Finally, the revolutionary description of the DNA structure by Watson and Crick [11] has been cited 4,065 times since 1953. In other words, there are fewer citations to the first description of the DNA helix than to Karnovsky's abstract with its short description of a fixative solution.
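The per-year figures quoted above are simple arithmetic: the citation count divided by the number of years since publication. As an illustrative sketch, using the counts reported in this essay and 2012 (the year of writing) as the reference year, the comparison can be reproduced like this:

```python
# Mean citations per year for the publications discussed above.
# Counts are those reported in this essay (as of 2012); they are
# illustrative snapshots, not current database values.
papers = {
    "Einstein 1905 (photoelectric effect)": (695, 1905),
    "Watson & Crick 1953 (DNA structure)": (4065, 1953),
    "Karnovsky 1965 (fixative abstract)": (7470, 1965),
    "Lowry et al. 1951 (protein assay)": (299360, 1951),
}

REFERENCE_YEAR = 2012  # snapshot year assumed for these counts

for name, (citations, year) in papers.items():
    per_year = citations / (REFERENCE_YEAR - year)
    print(f"{name}: {per_year:.1f} citations/year")
```

Einstein's 1905 paper averages about 6.5 citations per year, consistent with the "fewer than 7 per year" noted above, while Karnovsky's purely methodological abstract averages well over 100 per year, illustrating how poorly raw citation rates track intellectual impact.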

Citations are mainly found in papers published in the same area of knowledge or an adjacent field. If the community of researchers is large and very active, the chance that a paper published in the same field will be cited increases. This can easily be seen when we compare the impact factors of journals in different subject areas. Thomson Reuters groups journals together according to their fields of knowledge. Tables 1 and 2 show the median values of the impact factors of some selected categories. Looking at these data, it is obvious that the probability of a publication in mathematics being cited is considerably lower than that of a paper in medicine, and that the curriculum vitae of an "average" tissue engineer or molecular biologist will probably contain more citations to his papers than that of a world-class mathematician. Therefore, different scientific areas should never be compared by the number of citations to their publications.

This sometimes also holds within a scientific discipline. The average number of citations to papers in the field of tropical medicine is much lower than in oncology or cardiovascular medicine. How can we interpret these data? One main reason is that there are fewer researchers who would potentially cite an article in the field of tropical medicine than researchers working in oncology or cardiovascular medicine. Moreover, companies in the pharmaceutical industry are generally not interested in developing new drugs against tropical diseases, for economic reasons. In this case, the lack of economic impact reflects negatively on the development of science and the growth of intellectual impact. The example of the "neglected diseases" well illustrates the existence of important conflicts between the intellectual, social and financial impact of science.

University and governmental bureaucrats might be tempted to misuse the citation numbers of the work of research groups in an uncritical way for decisions on the distribution of support. As an example, the personnel and financial resources for mathematics or history might be reduced and transferred to molecular biology, tissue engineering and other new technologies. Unfortunately, this is happening all over the world with increasing frequency. The consequences will be disastrous in the long run. A vicious circle may be created: some scientific disciplines, the strongest ones, will drain more resources, attract more researchers and in consequence produce more papers. This increases the number of citations to their work and the impact factor of the journals in which they publish, and thus the possibility of getting new resources. In that way, smaller scientific disciplines might collapse. The university ecosystem, with its plurality of thinking, will lose some of its species. Academic life will become more monotonous, but this is not the main problem. We will be unable to respond to the challenge of social impact in the long run. Science will not be prepared to face relevant problems of mankind in an adequate way and to develop solutions in time. The world population is still increasing, and natural resources such as clean water or food are getting scarce. Environmental pollution and global warming continue to be unresolved problems. Many social, ethnic and religious conflicts generate violence. Therefore, the study of culture, criminology, political science, international relations, water resources and food science will probably gain increasing importance in the future. If one were to look only at the impact factors, as shown in Tables 1 and 2, these areas of knowledge would certainly not get priority at the universities. This would be a fatal error for society.

In summary, although measuring the intellectual impact of science by counting citations to publications seems to be the best proxy available at the moment, this procedure should be viewed with great caution. For a global evaluation of science, its social impact must be evaluated together with the intellectual one.


Aug 07, 2012

We are proud to present the fourth issue of the Journal of Unsolved Questions. The title of this issue is "Science under Pressure"; in our preface we discuss how evaluation and competition affect science and scientists. Furthermore, we are very honored to have Prof. Konradin Metze as a guest writer with his essay "Impact of Science – Critical Reflections on Its Evaluation". Highlights among the articles are a contribution from Munich in the field of linguistics about the Whorf hypothesis, and a contribution from Oxford in immunology about lung surfactant proteins.

Have an enjoyable read! We are looking forward to your comments!

Leonie Mueck on behalf of the Editorial Board


Jul 02, 2012

Sascha Henninger, University of Kaiserslautern, Kaiserslautern, Germany

Journal of Unsolved Questions, 2, 2, Open Questions, 10, 2012 (Received 29.05.2012, accepted 22.06.2012, published online 02.07.2012)

The initial research objective was to capture the metabolic heat flux, the heat given off by people’s bodies, in order to determine if it exerts a lasting influence on the air temperature of a space crowded with people comparable to a sold-out stand of a football stadium. …


The decreasing Whorf-effect: A study in the classifier systems of Mandarin and Thai

Jun 23, 2012

Fabian Bross and Philip Pfaller

Ludwig Maximilian-University, Munich, Germany

Journal of Unsolved Questions, 2, 2, Articles 19-24, 2012 (Received Feb 14th, accepted June 20th 2012, published online June 22nd, 2012)

The goal of this study was to test a weak form of the Sapir-Whorf hypothesis, which deals with one of the biggest unsolved questions in linguistics: does language affect the way we think? Grammatical systems in the world's languages differ in many aspects. Unlike English or German, many languages group nouns on the basis of noun classifiers. Recent research has addressed the question of whether the linguistic categories built up by these classifier systems influence non-linguistic thought. In this paper we studied Mandarin Chinese and Thai, two languages with classifier systems. Although both are classifier languages, they categorize objects in different ways. We tested whether these system differences lead to different similarity judgements of objects in a non-linguistic rating task (participants had to rate the similarity of picture pairs). In contrast to previous studies, we surprisingly observed no difference in categorization. It seems that the so-called Whorf effect, i.e. that language affects the way we perceive and categorize the world, diminishes rapidly over the time speakers are exposed to a different language system such as, in this case, German.


Jun 21, 2012

Thorsten Kahl, Johann Wolfgang von Goethe-Universitaet, Frankfurt am Main, Germany

Journal of Unsolved Questions, 2, 2, Open Questions, 7, 2012 (Received 21.03.2012, accepted 07.05.2012, published online 21.06.2012)

This open question emerged from a footnote I wrote questioning Spartan civil and political rights in archaic times. Plutarch's biography of Lykurgos soon became my main source because he describes details about archaic Sparta that can barely be found in other authors' works. This makes Plutarch indispensable for me. But it also poses a problem of source criticism, because we cannot verify or falsify Plutarch's Life of Lykurgos by comparing it to contemporary, archaic sources – there are none. …


Mar 31, 2012

Wolter Seuntjens, Dutch Academy of ‘Pataphysics, Amsterdam, Netherlands

Journal of Unsolved Questions, 2, 2, Open Questions, 4, 2012 (Received 05.02.2012, accepted 27.03.2012, published online 31.03.2012)

In the German language, when referring to an artistic representation of a naked human body, the noun 'Akt' is employed. This word was not always used to denote such an artistic representation. The combination of questions – (1) when, (2) where, (3) by whom, and (4) why was the noun 'Akt' first used in this particular meaning – was the starting point of my quest. This article chronicles the inception of the problem and the vain attempts to solve it. In short, it is the story of the 'null result' of 'Akt' research, so far.


The human lung surfactant proteins A (SP-A) and D (SP-D) share similar binding mechanisms and common ligands on macrophages and dendritic cells

Feb 18, 2012

Anne Jaekel and Robert B. Sim

MRC Immunochemistry Unit, Department of Biochemistry, University of Oxford, UK

Department of Pharmacology, University of Oxford, UK

Journal of Unsolved Questions, 2, 2, Articles 12-18, 2012 (Received Nov. 21st 2011, accepted Jan. 19th 2012, published online Feb. 18th, 2012)

The lung surfactant collectin proteins SP-A and SP-D have been shown to interact with phagocytic cells, such as macrophages and dendritic cells, to facilitate uptake of pathogens and apoptotic cells. However, the mechanism by which the collectins interact with the phagocytes, and which surface molecules on the phagocytic cells are involved, is not yet clear. In the present study, we demonstrate the interaction of SP-A and SP-D with phagocytic cells, including human monocyte-derived macrophages and immature dendritic cells. The results show that both proteins bind in a similar manner to both cell types. A prominent 20-22 kDa doublet band was observed on SDS-PAGE analysis as the major Ca2+-dependent ligand for SP-A and SP-D on both macrophages and dendritic cells. However, we were unable to identify the proteins involved.
