Is this a good publication in your opinion?

(01.02.11)

In a joint project with art researchers, we recently compared the ways in which lay people and those who have studied art history view, judge and value works of art. At the suggestion of one of our aesthetics colleagues, we formulated a laconic question, “Is this a good painting in your opinion?”, and asked the respondents to rate their answer on a scale of 1-5, from poor to good. Surprisingly, the question worked quite well, even though we didn’t know on what basis each person evaluated the level of ‘goodness’. What if I applied the same approach to manuscript evaluations and asked myself, “Is this a good work in my opinion?”

1. Hmph. Just horribly written. This wouldn’t even qualify as a student project. There is nothing new here, even though p-values are scattered across every page. What are the actual findings? Why are there no references to prior literature? Instead, the author seems to be simply reinventing the wheel all over again. So annoying; my time would have been better spent, say, skiing. Reject!

2. Say it isn’t so… I have already reviewed this manuscript for two other journals, and it hasn’t improved despite the comments it received. The results are masked in such verbal acrobatics that there is a danger some referee will make the mistake of recommending the paper for publication, simply because it would be too exhausting to uncover what is actually hiding amidst all that eloquence. The authors have already wasted the time of six referees to date and, thereby, the funds of taxpayers in several countries. Their idea is apparently to squeeze the paper into a high-impact journal without any regard for its errors. “Major revision” necessary, as I have stated before; fortunately, I can simply copy my original statement as is.

3. Incremental mainstream science. There is nothing wrong here: the methodology is fine, the number of samples is sufficient, and the literature review is done well. Replication of results is valuable, of course, but this study was perhaps as tedious to do as it was to read. Still, maybe the process taught the authors new methods and something about the art of writing a scientific paper. There were more than 12 authors; I wonder who is responsible for the work as a whole. This smacks of salami science, too, with the results sliced into little pieces for publication; I would guess that these researchers are rewarded by their number of publications. Well, so be it. A “minor revision” is needed, and then the paper can join the masses.

4. Pretty exciting results, clearly presented. An unbelievably simple idea. I wish I had thought of it, but I didn’t, even though the bait had been on the line for years. Print it without delay so that all of us, even as a large group, can enjoy and learn from it.

5. It is amazing what a researcher can come up with! At moments like this, it is fantastic to be part of the scientific community, even if only in the role of referee. Here we see another true Gyro Gearloose, who has turned obvious things on their head and set the ball rolling in a totally new direction. If only I could have such a revelation one of these days.

There we have it: the whole stack has been evaluated and all my opinions have been convincingly justified. But would my evaluation have changed if I had known that I had just read the 42nd publication by senior researcher N.N. this year (has he really even seen this paper?), that the research was done with minimal resources (should I temper justice with mercy?), that this highly critical individual has never published anything in this field of science (such arrogance!), that one of the authors had to retract three articles last year due to academic fraud (there must be something fishy about this one too), that this article was submitted and nearly accepted for publication in one of the top journals in the field (so it can’t be that bad), that the author is a good friend of mine (I am not that familiar with the topic, but I know my friends wouldn’t publish rubbish), or that these researchers have, in their presentations, criticised our excellent results (those bastards, wouldn’t I just like to show them)?

It seems to be an illusion that I could be any more objective in evaluating scientific publications than in assessing the aesthetic value of a work of art. Both rely simply on my own general impressions. Which of these paintings will be worth their weight in gold in 20 years, and which publication will inspire a brand new area of research? I can’t say, but it is certainly nicer to work with quality than with a poor cultivation of the mind, regardless of the field.

Riitta Hari

Academy Professor, Academician of Science
Aalto University, School of Science

 
