Jan 18, 2011

Dear Reader,

My first encounter with null-results in research happened right after I graduated from secondary school. Not one of the seemingly infinite number of polymerase chain reactions (PCRs) that I conducted as an intern in a molecular biology lab worked. I had tried to vary every possible parameter of the technique, and in the end there was not much left to do but give up. Frustrated as I was, I turned to a PhD student, who consoled me by showing me a list of reasons entitled “Why your PCR doesn’t work”. He pointed to the last item, and with great disbelief I read: Bad karma, God is punishing you.[1] Back then my mind was filled with Popper’s idea that “the striving for knowledge and the search for truth are the strongest motives of scientific discovery”.[2] But after this incident I began to have doubts: Is this how exact science really worked, like black box and black magic, relying on belief rather than knowledge? Naive as I was, I did not understand that the ideal ivory tower of science is not as immaculate as I had imagined, that the “free competition of thought”[2] is frequently biased by irrational human weaknesses, by politics, money, fashion – and, last but not least, by current publication practices. During my scientific upbringing I stumbled upon several absurdities where – in my humble opinion – publication practices simply hindered the advancement of science. Take, for example, the organic reaction one of my supervisors had conceived: it looked great in paper-and-pencil chemistry but mysteriously did not work, no matter how hard we tried. Why was such a curious case not publishable anywhere? Or the power that an editor had over the methodology used in one of my projects – if you did not use the editor’s own methodological developments, he would not rate the paper as publishable.
Digging deeper into such occurrences reveals that these were not just some unlucky events in the life of an undergraduate taking her first careful steps on the stage that is science. Let us look at some established facts.

In the 1990s a frenzy emerged in evolutionary biology after the exciting finding that females are more likely to mate with males exhibiting mirrored halves. Set off by a Danish zoologist, who measured the symmetry of male barn swallows’ feathers and correlated it with their reproductive success, scientists found the effect everywhere, in all kinds of species. Since mutations have long been known to be related to asymmetrical appearance in a being – to “fluctuating asymmetry” – the results seemed extremely plausible. But then this remarkable connection between aesthetics and genetics suddenly seemed to fall apart. While in the early 1990s almost all independent studies confirmed the original finding, in 1997 only four of twelve published studies yielded positive results.[3,4] What had happened to fluctuating asymmetry? Leigh Simmons, a biologist at the University of Western Australia, gives a hint. Trying to apply fluctuating asymmetry theory to horned beetles, he could not confirm the effect. Bad enough, but he states that “the worst part of it was that when I submitted these null-results I had difficulties getting them published. The journals only wanted confirming data. It was too exciting an idea to disprove, at least back then”.[4]
How does that fit with the free competition of thought?
Let us switch to genetics. As pointed out in the beginning, a big portion of molecular biology techniques seems related to black magic rather than to reproducibility and unambiguity. Various investigations show that this is not only the subjective impression of a helpless intern. When John Ioannidis and co-workers, working in two independent teams, tried to reproduce microarray-based gene expression analyses published in Nature Genetics (2005-2006), they could fully reproduce only two out of 18 analyses. They state that the discrepancies were only partially due to obtaining different results; most difficulties were caused by incomplete data and by ambiguous specification of the methods.[5] In which way does it help the advancement of knowledge if, in the majority of studies, important methodological information is held back, making the results irreproducible?

The reader may remember the scandal involving Jan Hendrik Schoen that shook the world in 2002. The German physicist, working at Bell Laboratories, had knowingly falsified data for at least three years, and – although loads of groups jumped on the bandwagon of his extraordinary findings – nobody noticed.[6] Could this have been partially avoided by also making null-results available to the public, or by requiring more raw data to be submitted with a publication?

One last example, this time from organic chemistry. Flipping through recent literature covering the art of organic synthesis, the reader will find that most reported reactions end up with a yield of >95%. It seems impossible to publish the synthesis of a natural compound or a new method involving a reaction yield lower than this threshold. If you have ever worked in organic synthesis, you probably share the scepticism about such high yields with Tomas Hudlicky and Martina Wernerova from Brock University in St. Catharines, Canada, who sat down to assess yields in organic chemistry reported from 1955 to 2005.[7] The study revealed that the habitual reporting of these almost perfect reaction yields started only in the 1980s – and to a large degree reproducibility is lost at this point in time as well. Carefully measuring yields during the process of extracting the pure product from the reaction mixture, the authors argue that in each work-up operation around 2% of the product is lost. If an organic reaction requires the usual three work-up steps (extraction, filtration, evaporation), yields >94% are considered unrealistic. The reasons the authors identify for these obviously flawed reaction yields are twofold. One is the difficulty of measuring yields precisely as the reaction scale decreases – and it has steadily decreased since the 1980s. The second is the pressure on scientists for their methods to stand out in the flood of published syntheses, and the related tendency toward “deliberate adjustments”, as the authors term it.[8]
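The ~94% ceiling follows from simply compounding the per-step loss. A minimal back-of-the-envelope sketch (the 2% loss per work-up operation is the figure cited above; the three-step count is the typical case mentioned; the rest is plain arithmetic):

```python
# Compound a ~2% product loss per work-up operation over three steps
# to estimate the maximum realistic isolated yield.
recovery_per_step = 0.98   # fraction of product retained per work-up step
steps = 3                  # extraction, filtration, evaporation
max_yield = recovery_per_step ** steps
print(f"Maximum realistic isolated yield: {max_yield:.1%}")  # prints 94.1%
```

With 0.98³ ≈ 0.941, any routinely reported yield above ~94% would imply essentially lossless work-up, which is the authors’ point.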

In a recent meta-survey, Daniele Fanelli from the University of Edinburgh put the number of scientists who admit to knowingly fabricating or falsifying data at around 2%.[9] Yet in the example above, a whole community knows that scientists are “deliberately adjusting” their reaction yields – and everybody just plays along?

In my opinion, all the examples mentioned above, and the numerous ones that did not find their way into this editorial note, can only lead to one conclusion: We cannot cope with bad scientific practice simply by introducing an “ethics of science” course into all undergraduate curricula. We need to start with a change in current publication practices. We need a journal that comprehensively celebrates an alternative handling of the Big Unknown, a journal that honestly shows how many scientific projects end up with ambiguous or irreproducible data, thus changing the “winning” culture of science into a “losing” culture – and a “losing” culture it definitely is, since most scientific efforts show us how little we actually know.

There are some jewels among the journals that partially fulfill these demands. In organic chemistry, Organic Syntheses[10] should be mentioned: a journal, active since 1914, that reproduces every submitted reaction procedure in one of the editors’ labs. There is also the Journal of Articles in Support of the Null Hypothesis, which publishes experimental studies from all areas of psychology that did not reach statistical significance, in order to avoid bias of editors or reviewers against studies that did not reject the null hypothesis.[11] In the medical sciences, the Journal of Negative Results in Biomedicine should not be forgotten; it likewise focuses on publishing excellent scientific work with negative or null results.[12]
Founding JUnQ, the Journal of Unsolved Questions, we want to tie together the loose ends of all these innovative ideas and establish a universal, interdisciplinary platform for publishing, discussing, and reflecting on the importance of null-results and open questions, in order to reintroduce the ideal of the “free competition of thought” to the publication business.[13] Many students, post-docs, and professors whom we talked to over the last half year told us that they did not believe any scientist would so openly admit to having “failed” with a research project. At JUnQ we believe that null-results should not be considered a “failure”; one should make the best of them – they might be a missing piece in the puzzle of knowledge that we are trying to solve.
I think the first issue of JUnQ that you, dear reader, are holding in your hands is a sign that this idea might be successful. Of course, there is a lot of work left to do in order to be recognized as a reliable and sincere scientific journal. In the next year we aim to steadily increase the number of contributing authors and published articles in order to gain more importance and publicity within the scientific community. Our goal is a biannual release of JUnQ, which would allow us to obtain an ISSN within the next 36 months. By promoting JUnQ on various platforms, on the internet and in real life, we want to persuade a growing number of scientists of the benefit of reading and publishing in JUnQ.

Furthermore, we are currently organizing a lecture series in the framework of the MAINZ Graduate School of Excellence entitled “Publish or Perish…?”, where current topics regarding good scientific practice, ethics, philosophy of science, and the publication business will be discussed. The first lecturer will be Prof. Dr. Siegfried Hunklinger, ombudsman of the Deutsche Forschungsgemeinschaft for good scientific practice; on April 13th he will talk about “Honesty in Science”.

At the end of this editorial note, the JUnQ editorial board would like to thank all the supporters without whom this first issue would not exist. We thank all contributing authors for being so bold as to publish in this unconventional project. We thank the advisory board for its ongoing support in all fields of action. Last but not least, we want to thank the MAINZ Graduate School of Excellence: within its framework the idea for JUnQ was born, and the Graduate School has strongly supported us from the start.
We wish you an enjoyable read of the first issue of JUnQ!
Leonie Mueck on behalf of the editorial board

[1] http://www.bio.uio.no/bot/ascomycetes/PCR.troubleshooting.html, accessed Dec 16th, 2010
[2] Popper K (1934/1959): The Logic of Scientific Discovery. Routledge (ISBN 0-415-27844-9)
[3] Jennions MD, Møller AP (2002): Relationships fade with time: a meta-analysis of temporal trends in publication in ecology and evolution. Proc Biol Sci 269(1486): 43-48
[4] Lehrer J (2010): The Truth Wears Off. The New Yorker, Dec 13, 2010
[5] Ioannidis JPA et al. (2009): Repeatability of microarray gene expression analyses. Nat Genet 41(2): 149-155 (Epub 2008 Jan 28)
[6] Reich ES (2009): Plastic Fantastic: How the Biggest Fraud in Physics Shook the Scientific World. Macmillan Science, New York; Report of the Investigation Committee on the Possibility of Scientific Misconduct in the Work of Hendrik Schoen and Coauthors, Bell Labs Research Review Report, September 2002
[7] Wernerova M, Hudlicky T (2010): On the Practical Limits of Determining Isolated Product Yields and Ratios of Stereoisomers: Reflections, Analysis, and Redemption. Synlett 2010(18): 2701-2707, DOI: 10.1055/s-0030-1259018
[8] Lowe D (2010): 99% yield? That, friends, is deception. http://pipeline.corante.com/archives/2010/11/12/99_yield_that_friends_is_deception.php, accessed Dec 26th, 2010
[9] Fanelli D (2009): How Many Scientists Fabricate and Falsify Research? A Systematic Review and Meta-Analysis of Survey Data. PLoS ONE 4(5): e5738, doi:10.1371/journal.pone.0005738
[10] http://www.orgsyn.org/, accessed Dec 26th, 2010
[11] http://www.jasnh.com/, accessed Dec 26th, 2010
[12] http://www.jnrbm.com/, accessed Dec 28th, 2010
[13] Ioannidis JPA (2006): Journals should publish all null-results and should sparingly publish “positive” results. Cancer Epidemiol Biomarkers Prev 15(1): 186
