
Junk pedagogical research


Can I teach? I think so. Should I aspire to publish papers telling other people how to teach? Probably not. This is me showing students how to estimate tree height.

In my last job I was employed on a teaching-track position*. For many years this worked reasonably well for me. I enjoy teaching, I think I’m quite good at it, and I didn’t mind a slightly higher load as the price of not needing to satisfy arbitrary targets for research grant income or publication in high-impact journals. That’s not to say I stopped doing research, because obviously that didn’t happen, but I accepted that there was a trade-off between the two and that I was closer to one end of the spectrum. It still left me three clear months every summer to get out into the field and collect data.

Many UK universities developed teaching-track positions in response to the national research assessment exercise (the REF**) which incentivised them to concentrate resources in the hands of a smaller number of staff whilst ensuring that someone else got on with the unimportant business of running the university and the distraction of educating undergraduates. Such is the true meaning of research-led teaching.

A problem began to arise when those staff who had been shuffled into teaching-track positions applied for promotion. The conventional signifiers of academic success weren’t relevant; you could hardly expect them to bring in large grants, publish in top-tier journals or deliver keynotes at major conferences if they weren’t being given the time or support to do so.

Some head-scratching took place and alternative means were sought to decide who was performing well. It’s hard enough to determine what quality teaching looks like at an institutional level***; assessing individuals is harder still.

The first thing to turn to is student evaluations. These largely measure how good a lecturer is at entertaining and pleasing their students, or how much the students enjoy the subject. Evidence suggests that evaluation scores are inversely related to how much students actually learn, and that they are biased against women and minority groups. In short they’re not just the wrong measure, they’re actively regressive in their effects. Not that this stops many universities using them, of course.

What else is there? Well, being academics, the natural form of output to aim for is publications. It’s the only currency some academics understand. Not scientific research papers, of course, because teaching staff aren’t supposed to be active researchers. So instead the expectation became that they would publish papers based on pedagogical research****. This sounds, on the face of it, quite sensible, which is why many universities went down that route. But there are three major problems.

1. Pedagogical research isn’t easy. There are whole fields of study, often based in departments of psychology, whose researchers have developed approaches and standards to ensure that work is of appropriate quality. Expecting an academic with a background in biochemistry or condensed matter physics to publish in a competitive journal of pedagogical research without the necessary training is unreasonable. Moreover, it’s an implicit insult to those colleagues for whom such work is their main focus. Demanding that all teachers should publish pedagogical research implies that anyone can do it. They can’t.

2. Very few academics follow pedagogical research. That’s not to say that they shouldn’t. Most academics teach and are genuinely interested in doing so as effectively as possible. But the simple truth is that it’s hard enough to keep track of the literature in our areas of research specialism. Not many can make time to add another, usually unrelated field to their reading list. I consider myself more engaged than most and even I encounter relevant studies only through social media or articles for a general readership.

3. A lot of pedagogical research is junk. Please don’t think I’m talking about the excellent, specialist work done by expert researchers into effective education practice. There is great work out there in internationally respected journals. I’m talking about the many unlisted, low-quality journals that have proliferated over recent years, and which give education research a bad name. Even if they contain some peer review process, many are effectively pay-to-publish, and some are actively predatory. I won’t name any here because that’s just asking for abusive e-mails.

Why do these weak journals exist? Well, we have created an incentive structure in which a class of academics needs to publish something — anything — in order to gain recognition and progress in their careers. A practice which we would frown upon in ‘normal’ research is actively encouraged by many of the world’s top universities. Junk journals and even junk conferences proliferate as a way to satisfy universities’ own internal contradictions.

What’s the alternative? I have three suggestions:

1. Stop imposing an expectation based on research onto educators. If research and teaching are to be separated (a trend I disagree with anyway) then they can’t continue to be judged by the same metrics. Incentivising publications for their own sake helps no-one. Some educators will of course want to carry out their own independent studies, and this should be encouraged and respected, but it isn’t the right approach for everyone.

2. Put some effort into finding out whether teachers are good at their job. This means peer assessments of teaching, student performance and effective innovation. All this is difficult and time-consuming but if we want to recognise good teachers then we need to take the time to do it properly. Proxy measures are no substitute. Whether someone can write a paper about teaching doesn’t imply that they can teach.

3. Support serious pedagogical researchers. If you’re based in a large university then there’s almost certainly a group of specialist researchers already there. How much have you heard about their work? Have you collaborated with them? Universities have native expertise which could be used to improve teaching practice, usually much more efficiently than forcing non-specialists to jump through hoops. If the objective is genuinely to improve teaching standards then ask the people who know how to do it.

If there’s one thing that shows that evaluations of teaching aren’t working, or aren’t taken seriously, it’s that universities don’t make high-level appointments based on teaching. Prestige chairs exist to hire big-hitters in research on the strength of their international profile, grant income and publication record. When was the last time you heard of a university recruiting a senior professor because they were great at teaching? Tell me once you’ve stopped laughing.



* This is now relatively common among universities in Europe and North America. The basic principle is that some staff are given workloads that allow them to carry out research, whilst others are given heavier teaching and administrative loads but the expectations for their research income and outputs are correspondingly reduced.

** If you don’t know about the Research Excellence Framework and how it has poisoned academic life in the UK then don’t ask. Reactions from those involved may vary from gentle sobs to inchoate screaming.

*** Which gave rise to the Teaching Excellence Framework, or TEF, and yet more anguish for UK academics. Because the obvious way to deal with the distorting effect of one ranking system is to create another. Surely that’s enough assessment of universities based on flawed data? No, of course not, because there’s also the Knowledge Exchange Framework (KEF) coming up. I’m not even joking.

**** Oddly textbooks often don’t count. No, I can’t explain this. But I was told that publishing a textbook didn’t count as scholarship in education.


What academic journals should I follow?


Yay, more journal issues have arrived! I’ll add them to the heap. It’s becoming a fire hazard.

A few years ago I bemoaned the fact that I had effectively stopped reading the academic literature. Despite apparently being a common phenomenon among mid-career academics, at least based on my conversations with colleagues, it provoked a nagging guilt. How can we tell our students to read constantly if we don’t practise what we preach?

Over the years the table of contents emails continued to pile up, causing permanent low-level stress as I realised how much interesting, relevant and important science was simply passing me by. But there was no time to do anything about it, nor would there ever be. With a heavy heart I deleted them all. This has, in effect, blinded me to several years of output in almost all of the journals that I used to follow*.

That’s not to say I haven’t been reading any papers. Every time I need to write a manuscript, proposal or lecture, I’ve carried out a targeted search and found what I needed to get the job done. This is a limited way to learn about science though; it doesn’t expose you to as many new ideas. I was raiding the literature, not reading it.

I’ve now come up with a new system based on the principle that it’s better to do a small amount well than to attempt too much and fail. I have selected ten journals for which my aim is to scan the contents of every issue and read the papers that are most compelling. These make up my ‘essential’ list. Next is a set of ten that I will scan if I have time; if the next issue comes out before I’ve had a chance, they’ll be ignored. This means I will only follow a maximum of 20 journals at any given time**.

Essential: Science, Nature, PNAS, PRSB, Nature Ecology & Evolution, Ecology Letters, Ecology, Journal of Ecology, Ecological Monographs, TREE.

Time-permitting: American Naturalist, Nature Communications, Methods in Ecology & Evolution, Frontiers in Ecology and the Environment, PPEES, Forest Ecology and Management, J Veg Sci, Biotropica, GEB, Journal of Biogeography.

I could easily list another ten, or twenty, that I would love to read if there were room in my life, but there isn’t. It’s been a tough decision-making process. If you’ve tried something similar, then what did you end up with? How did you decide? If anyone is interested then my rationale for selection is below the fold.

I’ve focussed here on how to keep pace with new literature. It doesn’t even touch on other issues such as the value of reading older papers, reading outside your own narrow field of study, or whether sometimes it’s best not to read at all. Some people will even argue that the whole concept of journals is becoming obsolete, and that in a world of online search engines we no longer need them as anything other than gatekeepers. I have some sympathy for this view, but the Brave New World has yet to arrive, so I’m making use of the system we have.


Unpublished works

A few years ago I attended a workshop session on publishing for early-career scientists. One earnest delegate spoke up in favour of submitting work to local journals, especially if you work overseas. It helps build science in your host country, demonstrates willingness to engage with their institutions, and ensures that all your research gets published — even the bits that more prestigious journals might look down upon. For many natural history observations this is about the only way to get such findings into the literature.

I politely disagreed, specifically for early-career researchers, while accepting all the points they made. There is an important skill to learn, and it’s that of letting go. If you can write the big prestigious paper, then write the big prestigious paper. If you can’t, go back to the field/lab/computer and get the data you need to write it. Don’t waste time on the small stuff. It won’t help your CV, and all these noble intentions count for little if you don’t get a job. Recruitment panels won’t care about your lovely paper in the Guatemalan Nature Journal*.

Some people believe that all this unpublished work is a problem for science. Jarrod Hadfield recently wrote, in a provocative meeting report for the Methods in Ecology and Evolution blog, that preregistration of analyses would ensure that “the underworld of unpublished studies would be exposed and their detrimental effects could be adjusted for.” He notes, then dismisses, concerns about the extra workload involved or the frequent changes of plans that take place due to unforeseen circumstances.

Would you, as Orpheus, wish to venture into the underworld? Then look upon my file drawer and weep.


The cabinet of broken dreams. Beware: when you gaze into the file drawer, the file drawer also gazes into you.

This is filled with countless manuscripts at various stages of abandonment. Much sound data collected during my PhD with blood, sweat and tears (all quite literally) languishes here, almost certain to never see the light of day. Likewise there is still unpublished data from my second post-doc. Why have I allowed so many potential publications to rot? How can I live with myself while denying the wider scientific community access to this information?

There’s a simple answer — I had more important things to do. Every active decision you make in life to do something has a consequence elsewhere. Even writing this post. Sometimes I needed to work on another, better paper. The rest of the time I had to do all the things that keep me employed (teaching, administration, grant applications) or sane (sleeping, reading, holidays, drinking).

One thing I’ve learnt in recent years is that the hassle of publishing in a small journal isn’t that much lower than a large journal. There are several reasons for this:

  • Preparing the manuscript is no less time-consuming. Even though the expectations for data quality might be lower, the processes of analysing data, finding and reading the literature, preparing figures and putting everything together are much the same.
  • The quality of reviews is often lower for smaller journals (or at least the variance in quality is higher), increasing the amount of time it takes to respond to them. This shouldn’t be the case, but experience clearly indicates that it is.** Don’t vainly expect the journal to be simply grateful to receive your submission.
  • Lower-ranking journals employ smaller editing teams working with fewer resources. This might not seem like a big deal, but once your paper is accepted it makes all the difference. In a mainstream journal the proofs are turned around quickly and without fuss. It can be on the website in no time. In minor journals you might end up doing much of the legwork yourself. ***

There are sometimes good reasons to publish in a small journal. If you’ve put all the effort into writing a manuscript that was rejected higher up, then go for it, you’ve already invested the time ****. When moving into a new field I like to publish something small just to prove to myself that I can; it also helps with getting my head around a new literature. As a student there’s also great value in getting your first publication anywhere you can, just to experience the process.

What I advise against is writing a paper which you intend from the outset to submit to a small journal. Many studies in ecology don’t get published solely because there’s something better to do. Maybe the results were too complicated to tell a neat story, or couldn’t be easily explained. Maybe all the tests came back non-significant. Given a choice, any scientist should write up the paper with the greatest chance of getting published in a good journal. The small ones are unlikely to provide the same return on your time investment.

The file drawer problem doesn’t occur because we have something to hide, although this may well be true of medical trials or in some highly competitive fields. It’s mostly because we don’t have time. Learn to let go or else the ghosts of unpublished papers will haunt you for the rest of your career.



* Don’t get upset with me over whether they should, the point is that they don’t.

** The reason is pretty obvious. If I receive a review request from Big Name Journal then I know that (a) the authors thought it was important enough to submit there and (b) a specialist editor agreed with them. I’m therefore likely to be interested in it. On the other hand, if I receive a review request from Journal Named After Taxon, I might see which of the post-grads is checking Facebook and offer them a valuable learning experience.

*** In one case I’ve spent more time on editing post-acceptance than I did on writing the paper. I won’t reveal which, but let’s just say that their demands corresponded to neither the website’s Instructions to Authors nor the Chicago Manual of Style.

**** This is only true if your paper was rejected either for not being a good fit or for not quite being interesting or novel enough. If there were fundamental and irredeemable errors with the work then persisting would be a case of Concorde fallacy. Chalk it down to experience and concentrate on fixing the problems for the next manuscript.

Why I stopped reading the literature

This year I stopped reading the academic literature. Not entirely, of course — that would be career suicide. Nor is this a deliberately awkward response to the latest hashtag tyranny of #365papers, where fellow academics post how many papers they’ve read either to impress others or make them feel guilty. Mine was an accident that has settled into a default state.

For the last decade I have been able to claim with confidence that I read roughly 1000 papers a year. Now when I say read, you should be given to understand that this doesn’t mean poring over every single word. The normal protocol is to read the abstract, skim the introduction, flick through the figures then read the discussion until it gets boring*. If there’s anything that needs further scrutiny then I’ll look more closely, but it’s rare that the methods will receive more than cursory attention, perhaps checking for a few key words or standard techniques. I think most academics would say that in practice this is how they read papers.

By the end of the week I’m not mentally capable of intellectually-demanding work like writing manuscripts or analysing data, unless the pressure of a deadline forces me into it. So I’ve tended to hold Friday afternoons as a drop-in time for my group, and spent the gaps between meetings looking through recent journal issues and reading papers. This has helped me keep up to date with novel ideas, exposed me to new studies, and honed my awareness of what types of things are getting published.

My pattern of work all changed in the last academic year because a new module was inflicted on me, with sessions scheduled in the Friday afternoon slot. No-one wants that time, least of all the students. It’s perhaps only marginally less unpopular than 9am on a Monday morning. Who wants to be in a lecture when there are pubs to go to? (I mean on Fridays, not 9am on Mondays. We’re not all alcoholics in the UK.)

My journal alerts system (I use Zetoc) has built up over the years to incorporate a wide array of sources. There are tables of contents for particular journals, search terms for the fields that I specialise in, and even a few names of colleagues whose work particularly interests me. I’m lucky enough to not need to keep track of competitors because I work in a field that no-one cares about so there’s little risk of being scooped**. At this moment the total number of unread alerts is about to pass 300. Catching up on all of those has reached the point where it’s simply impossible, unless I take a few weeks’ holiday and spend the whole time on academic reading. Which I’m not going to do.

When I was a (more) junior academic I remember being told by (more) senior academics that they didn’t read the literature any more. This struck me as a great pity. One phrase that I heard second-hand, supposedly from Chris Thomas, was that he no longer reads the literature — he raids it. If you’re writing a manuscript and need a reference to make a specific point then you go looking for an appropriate paper rather than attempting to follow everything. Another colleague told me that he expects his group to be his eyes into the literature, and relies on them to spot important new publications, which he gleans from their manuscripts and recycles into the next grant proposal.

With mixed feelings I’ve realised that I’m now headed in the same direction. I’m coming to terms with the idea that, in many cases, my graduate students have a firmer grasp of the frontline of the field than I do. Perhaps this isn’t such a bad thing. Over the last few years while writing a textbook it’s been necessary for me to keep on top of the literature to make sure I’m up-to-date. When covering so many subjects at once this is an overwhelming task. Delivering the final copy to the publishers removed the ongoing pressure to read and read more. But why was that process not fun? How can someone who loves his research and is passionate about his field not unequivocally enjoy the process of reading and discovering more about it?

A clue comes from a Masters-level class on science writing that I’ve just finished. This year I introduced a new exercise: the students were asked to come along with a piece of writing that they enjoyed reading. This could be anything at all — a book, website, magazine, paper — so long as it was in prose. Out of a class of 35, only one brought an article from a scientific journal. There were a handful with popular science books (Dawkins, E.O. Wilson), but the overwhelming majority arrived carrying fiction books.

What does this tell us? A small sample size, I know, but at least it’s an indication. These keen and bright students, at a top university***, immersed in the scientific literature, don’t first think of an academic paper when they’re asked about the most enjoyable things they read. This is probably because, for the most part, academic writing is terrible. Not many people would choose to read it for fun in their spare time. I read constantly at home — but the pile of papers in the corner isn’t the first thing I reach for.

The purpose of our class exercise was to look at the structure of enjoyable writing and see whether there are lessons that can be learnt for our own work. The pointers were perhaps predictable but nonetheless helpful: shorter sentences, simpler words, a focus on engaging rather than impressing the reader. My hope is that one day some of these students go on to produce a higher quality of scientific prose than the general average. Perhaps, in our small ways, we can redirect the tenor of academic writing and make it more pleasurable to read. Who knows, it might get me reading again.

* They all do, even mine. It’s the point where the author switches from actually discussing the results and their implications, and moves on to tenuous speculation or unnecessary criticism of other people’s work.

** This isn’t quite true on two counts. Firstly, there are plenty of people working on spatial self-organisation in natural systems. My experience, however, is that they’re (almost) all nice, supportive and collegiate people who encourage one another. I’ve never got the impression that there’s any competition. The other reason why scooping isn’t so much of a risk is that in ecology, data is king. No-one is going to beat me to publishing papers on Kamchatkan forest organisation because I’m pretty sure that no-one else has those kinds of data.

*** That’s what we’d like to believe, anyway. We do pretty well in some league tables but aren’t as impressive in others. Mostly we end up in the global top 100 and the UK top 20.

Consult the index

I’m presently mired in what is one of the most tiresome, tedious tasks I’ve had to perform in my academic career. Bear in mind that I say this as someone who spent three years tracking levels of herbivore damage on 20 000 individual leaves as part of my PhD. I’ve counted pollen. I’ve catalogued herbarium specimens. This is an order of magnitude worse.

The task at hand is to produce an index for my textbook, Natural Systems: The Organisation of Life*, which is finally due to be published in March 2016. I knew that indexing would be hard. I didn’t quite appreciate how hard. And that’s while using LaTeX, which makes everything much more straightforward. I can’t even imagine having to do this in hard copy or (shudder) in Microsoft Word**. There are some useful guides to indexing. There’s even a book called Index Your Book Fast, though one suspects that the time taken in reading it would more than offset any gains. None of them make it any easier.
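For anyone curious about the mechanics, a minimal sketch of how the standard makeidx package handles this is below; the entry terms are hypothetical examples, not taken from the book. Each \index command writes an entry and its page number to a .idx file, which the makeindex program sorts and formats before a second compilation typesets the result.

```latex
\documentclass{book}
\usepackage{makeidx}
\makeindex  % enable writing of entries to the .idx file

\begin{document}
% \index{} commands sit next to the terms they mark;
% ! creates a nested sub-entry, |see{} a cross-reference.
Spatial self-organisation\index{self-organisation!spatial} is one
mechanism behind pattern formation\index{pattern formation} in
vegetation\index{plant communities|see{vegetation}}.
\printindex  % the sorted index is typeset here
\end{document}
```

The workflow is pdflatex, then makeindex on the generated .idx file, then pdflatex again. The mechanics, of course, are the easy part; deciding which of those \index commands deserve to exist is the bit that takes months and wine.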

While it’s not difficult to imagine an ideal index in abstract terms, actually putting one together is trickier. I’m currently working through the book sentence-by-sentence, deciding whether this or that term is a passing or substantive mention, whether it needs to be nested within other groups, and when I might ever finish. Who or what deserves a place in the index? Main concepts are obviously in. What about taxa, important people, study sites, species… where does it end?

As a book reviews editor myself (for Frontiers of Biogeography) I’m acutely aware of that typical complaint by reviewers that ‘subject X doesn’t even make it into the index!’ This could mean any number of things: that the subject isn’t covered by the book, that the index has omitted to mention it, or that the reviewer hasn’t read the book properly. A skim of the index is often one of the first things a prospective purchaser does while browsing and forms a central element of the impression a book makes. Getting it right is crucial because it makes a book more useful to future readers. Too long or trivial and it’s overwhelming; too short and it looks skimpy.

One might ask why I’ve bothered writing a blog post about a topic so dull as indexing (although if you’re finding this particularly fascinating then you should read The Indexer, the international journal of indexing). In part it’s as a corrective to recent posts which may have given the false impression of my life as one of tropical jaunts spent being pursued by dangerous animals. All that happens, but actually 9 months of my year is spent in front of a computer screen. I’m also keen that you realise, when you turn to the back of a book and flick through the index, that a surprising amount of work has gone into preparing it. And, in my case, a surprising amount of wine.

* The blurb on this site is a cut-and-paste from the original proposal, submitted three years ago, and doesn’t really capture the book content. The cover image is also under review right now. All this will be filled in over the next couple of months.

** I haven’t used Word in several years, and it’s made my life immeasurably happier. You could do the same.

The Law of Good Enough (or why your thesis will never be finished)

I spent quite a bit of time recently meeting our section’s post-graduate students for tutorials. In some cases this is to welcome new arrivals, or to catch up on progress from those who have been away on lengthy field seasons. The ones I most enjoy seeing are those who are busy writing up — because they’re the ones I’m most able to help.

It can be difficult to persuade a postgrad staring down their thesis deadline that 15 minutes in my office is time well spent, which I fully understand. Usually they are stressed, feeling the pressure and unable to focus on anything other than the thesis. Much of this derives from a sentiment I hear echoed again and again in various forms: “I just want to do the best job I can”.

No. Stop. This is not the way to approach a thesis. You need your thesis to be good enough.

This shift in attitude is hard to accomplish when your whole academic career has been geared towards achieving the highest mark possible, or at the end of four years when you want to have something on your shelf to be proud of, that you can look at and think “I wrote that” and feel a warm glow inside. Allowing yourself to fall into this vanity trap is pathological, and the root cause of a lot of unnecessary stress on the part of post-graduates.

Your thesis is the means to an end, which is graduation. When the day comes, you will walk across a stage for 20 seconds, shake someone’s hand, collect a piece of paper and get a photo taken in a silly gown. It doesn’t matter if you’ve written the most pellucid, inspiring and impressive thesis of all time. No-one will clap any louder, or any longer. No-one will ever judge you on the quality of that thesis, good or bad. All that matters is that it was good enough.

In one of the labs I worked in we had a thesis that did the rounds of the post-graduates who were writing up. You might think that they were sharing a particularly wonderful thesis so as to learn best practice and be inspired by the achievements of others. I’m sure all the supervisors would have preferred that. But no, the thesis everyone wanted to see was singularly atrocious. No-one reading it could fail to spot glaring errors, hideous formatting and some of the worst figures ever committed to print. That’s exactly why everyone was so keen to read it — if this person passed then surely there was hope for others!

I’m not going to reveal whose thesis it was, because that doesn’t matter. They have gone on to a successful academic career where they are respected in their field with an international profile. Does anyone care that they submitted a shoddy thesis? Of course not. It was good enough. On the other hand, the best thesis I ever read remains that by Mike Shanahan, who preceded me by a couple of years and even worked at the same desk. Nothing could be more demoralising than to witness a standard of writing to which I had no hope of aspiring (at the time). Perhaps he still looks with satisfaction upon that thesis. He might do so again if he reads this. My bet is that it hasn’t crossed his mind in a decade or more. Did it benefit his career? Maybe, but probably not that much.

There is an argument that a better thesis will lead to an easier viva, and that’s perhaps the case, but my suspicion is that the correlation is not strong. How a viva goes depends on the personality of the examiners, their particular bugbears, the wind direction and the alignment of the stars. You can no more predict the questions than you can anticipate how many corrections you’re likely to get. The time to be a perfectionist, or at least to aim for the highest standards you can, is when you’re preparing a manuscript for publication. Then you know it’s going to be pored over in great detail. A publication is your contribution to the legacy of science, a work that will be forever associated with you. The thesis? That’s a bookend.

The best advice I ever received while writing up was from another old hand in the group who told me that a thesis is never finished. Eventually you just relinquish it to the examiners. Bear this in mind if you’re tempted to read and reread chapters, add more references, or tinker endlessly with the figures. There’s always something else you could do. Just get it done, make sure it’s good enough, then move on to the rest of your career.


Edit: @ZarahPattison made an interesting point on Twitter about thesis by publication. Although this is arguably the best possible way to prepare a thesis, it’s not for everyone, and many universities don’t even allow it. I wouldn’t like to give any student the idea that it was an expectation, not least because I didn’t manage it myself. It’s certainly true that a well-written chapter is easier to turn into a manuscript, but that’s missing the point. If you want to write a manuscript, write a manuscript. If you have a manuscript then turning it into a chapter is easy. If you need to finish a thesis then get the chapters done and worry about the manuscripts later.

How to respond to referees’ comments

The first time I submitted a manuscript, it came back months later with a lengthy list of recommended changes and an equivocal response from the editor which implied that he was reluctant to hear from me again but might deign to respond if I proved myself worthy. I was devastated. It had taken an immense amount of work to prepare, and by now I’d moved on to other things. I glumly sloped into my supervisor’s office and was taken aback by his enthusiasm. Apparently this was what passed for good news in science *.

Since then I’ve been through the manuscript submission mill many times and always prepare my students in advance for the likely tone of what they will receive. It doesn’t get any easier. I still can’t read comments as soon as they arrive. Normally I’ll read what the editor says, skim the rest, then go for a short walk around the lake to calm down. Sometimes it takes several laps.

Eventually, however, you need to brace yourself and get down to the revisions. Clear your diary, close the door, unplug the phone (and the internet) and make sure there are no distractions. Don’t leave until it’s done. Unpleasant jobs are always the easiest ones to put off, and revising a manuscript comes pretty low on my list of favourite ways to spend an afternoon.

Assuming you have an invited resubmission (rather than an outright rejection **), here is a quick guide to how to respond to manuscript reviews that I wrote for my PhD students:

  1. Write back to the editor immediately, thanking them and the referees for their time and helpful comments. Even if you’re not grateful and they weren’t helpful. Even if they rejected the manuscript ***. Being nice works wonders in the long term because they will see your work again. They have also taken their own limited time, usually unpaid, to look at what you’ve submitted.
  2. Compose a response letter, starting in much the same way. List and address every single comment made by the editor and referees sequentially and in full. Keep in the positive ones too; it makes you feel better.
  3. Make it as easy as possible for the editor to tell that you’ve made the changes requested. This means that instead of saying ‘This has been done’, or ‘A paragraph on this has been added to the discussion’, say ‘This is a very helpful comment. We have therefore inserted a new paragraph in lines 283–292 which explains how…’ etc. Editors are busy and don’t like to have to work harder to check whether you’ve followed instructions.
  4. Tread carefully if you disagree with any comment. If it makes no material difference then make the change, even if it’s only a matter of preference. Only contest a point if you are convinced that the referee is wrong and you can back it up. Even so, apologise for not making the manuscript clear enough and specify where you have added clarifications or extra evidence in the text. If you’ve failed to convince them the first time around then it implies that you need to change something.
  5. Try not to use track changes, comments, bold type or other formatting to note changes to the manuscript itself. In my experience (usually when requested to by editors…) this leads to errors in the final copy. Refer to line numbers instead.
  6. Take extra time on the figures. Clear, high-quality figures give your paper a greater chance of being read, cited and used by others. If the figures look amateur then no-one will bother reading the text. Use this opportunity to redraw and tweak them using proper tools (e.g. Inkscape, sK1, ImageMagick, GIMP). Don’t rely on Microsoft Office products to create publication-quality images.
  7. Never play referees off against each other. If they disagree on a point then compromise and ask the editor for guidance. Also note that if only one referee picks up on something, this does not imply that all the others are on your side. They may simply not have noticed.

Finally, in almost all cases reviewers are doing it because they genuinely care about maintaining standards in the scientific literature and improving the quality of work that gets published. Occasionally a reviewer might block a manuscript because it’s too close to their own work, because it contradicts their findings, or out of some personal vendetta against you or your collaborators. This is exceptionally rare though, and can seldom be demonstrated. Even if you suspect it, you’re most likely wrong, and should never say so in your response. No-one is out to thwart you.

Good luck, and remember, we all go through this. If it starts to get you down then go and vent to a colleague. Everyone has stories to share.


* A friend at a university in a developing country once related that the modal number of papers among his faculty colleagues was zero. Exploring the causes of this, it transpired that in many cases they had once submitted something to an international journal and been so offended by the audacity of the response that they had vowed never to subject themselves to such humiliation again. This was true even of senior professors.

** I would recommend doing all this even if you’ve been rejected. Partly because you have a high risk of coming across the same referees again at a different journal, but mainly because it forces you to confront the criticisms of your work.

*** Don’t contest a rejection unless one of two things applies. Either there has been a gross mistake made by one of the referees, and you can unequivocally demonstrate this. Or you’re submitting to one of the big journals (Nature, Science), where putting up a fight can make a difference. Apparently. It’s never worked for me.