Tag Archives: academic literature

Junk pedagogical research


Can I teach? I think so. Should I aspire to publish papers telling other people how to teach? Probably not. This is me showing students how to estimate tree height.

In my last job I was employed on a teaching-track position*. For many years this worked reasonably well for me. I enjoy teaching, I think I’m quite good at it, and I didn’t mind a slightly higher load as the price of not needing to satisfy arbitrary targets for research grant income or publication in high-impact journals. That’s not to say I stopped doing research, because obviously that didn’t happen, but I accepted that there was a trade-off between the two and that I was closer to one end of the spectrum. It still left me three clear months every summer to get out into the field and collect data.

Many UK universities developed teaching-track positions in response to the national research assessment exercise (the REF**) which incentivised them to concentrate resources in the hands of a smaller number of staff whilst ensuring that someone else got on with the unimportant business of running the university and the distraction of educating undergraduates. Such is the true meaning of research-led teaching.

A problem began to arise when those staff who had been shuffled into teaching-track positions applied for promotion. The conventional signifiers of academic success weren’t relevant; you could hardly expect them to bring in large grants, publish in top-tier journals or deliver keynotes at major conferences if they weren’t being given the time or support to do so.

Some head-scratching took place and alternative means were sought out to decide who was performing well. It’s hard enough to determine what quality teaching looks like at an institutional level***, and assessing individuals is correspondingly even more difficult.

The first thing to turn to is student evaluations. These largely measure how good a lecturer is at entertaining and pleasing their students, or how much the students enjoy the subject. Evidence suggests that evaluation scores are inversely correlated with how much students actually learn, as well as being biased against women and protected minorities. In short they’re not just the wrong measure, they’re actively regressive in their effects. Not that this stops many universities using them, of course.

What else is there? Well, being academics, the natural form of output to aim for is publications. It’s the only currency some academics understand. Not scientific research papers, of course, because teaching staff aren’t supposed to be active researchers. So instead the expectation became that they would publish papers based on pedagogical research****. This sounds, on the face of it, quite sensible, which is why many universities went down that route. But there are three major problems.

1. Pedagogical research isn’t easy. There are whole fields of study, often based in departments of psychology, which have developed approaches and standards to ensure that work is of appropriate quality. Expecting an academic with a background in biochemistry or condensed matter physics to publish in a competitive journal of pedagogical research without the necessary training is unreasonable. Moreover, it’s an implicit insult to those colleagues for whom such work is their main focus. Demanding that all teachers publish pedagogical research implies that anyone can do it. They can’t.

2. Very few academics follow pedagogical research. That’s not to say that they shouldn’t. Most academics teach and are genuinely interested in doing so as effectively as possible. But the simple truth is that it’s hard enough to keep track of the literature in our areas of research specialism. Not many can make time to add another, usually unrelated field to their reading list. I consider myself more engaged than most and even I encounter relevant studies only through social media or articles for a general readership.

3. A lot of pedagogical research is junk. Please don’t think I’m talking about the excellent, specialist work done by expert researchers into effective education practice. There is great work out there in internationally respected journals. I’m talking about the many unlisted, low-quality journals that have proliferated over recent years, and which give education research a bad name. Even if they contain some peer review process, many are effectively pay-to-publish, and some are actively predatory. I won’t name any here because that’s just asking for abusive e-mails.

Why do these weak journals exist? Well, we have created an incentive structure in which a class of academics needs to publish something — anything — in order to gain recognition and progress in their careers. A practice which we would frown upon in ‘normal’ research is actively encouraged by many of the world’s top universities. Junk journals and even junk conferences proliferate as a way to satisfy universities’ own internal contradictions.

What’s the alternative? I have three suggestions:

1. Stop imposing an expectation based on research onto educators. If research and teaching are to be separated (a trend I disagree with anyway) then they can’t continue to be judged by the same metrics. Incentivising publications for their own sake helps no-one. Some educators will of course want to carry out their own independent studies, and this should be encouraged and respected, but it isn’t the right approach for everyone.

2. Put some effort into finding out whether teachers are good at their job. This means peer assessments of teaching, student performance and effective innovation. All this is difficult and time-consuming, but if we want to recognise good teachers then we need to take the time to do it properly. Proxy measures are no substitute. That someone can write a paper about teaching doesn’t mean that they can teach.

3. Support serious pedagogical researchers. If you’re based in a large university then there’s almost certainly a group of specialist researchers already there. How much have you heard about their work? Have you collaborated with them? Universities have native expertise which could be used to improve teaching practice, usually much more efficiently than forcing non-specialists to jump through hoops. If the objective is genuinely to improve teaching standards then ask the people who know how to do it.

If there’s one thing that shows how evaluations of teaching aren’t working or taken seriously it’s that universities don’t make high-level appointments based on teaching. Prestige chairs exist to hire big-hitters in research based on their international profile, grant income and publication record. When was the last time you heard of a university recruiting a senior professor because they were great at teaching? Tell me once you’ve stopped laughing.



* This is now relatively common among universities in Europe and North America. The basic principle is that some staff are given workloads that allow them to carry out research, whilst others are given heavier teaching and administrative loads but the expectations for their research income and outputs are correspondingly reduced.

** If you don’t know about the Research Excellence Framework and how it has poisoned academic life in the UK then don’t ask. Reactions from those involved may vary from gentle sobs to inchoate screaming.

*** Which gave rise to the Teaching Excellence Framework, or TEF, and yet more anguish for UK academics. Because the obvious way to deal with the distorting effect of one ranking system is to create another. Surely that’s enough assessment of universities based on flawed data? No, of course not, because there’s also the Knowledge Exchange Framework (KEF) coming up. I’m not even joking.

**** Oddly textbooks often don’t count. No, I can’t explain this. But I was told that publishing a textbook didn’t count as scholarship in education.


Books I haven’t read


The opening pages of Darwin’s classic text in its first edition as held by the library of St John’s College, Cambridge.

A number of years ago on a UK radio show there was a flurry of attention when Richard Dawkins, under pressure from a religious interviewer, was unable to recall the full title of Darwin’s most famous book*. This was perceived as a flaw in his authority as an evolutionary biologist. How could he claim to support evolution if he couldn’t even name the book which launched the theory?

There was a prompt backlash to this line of argument from scientists who pointed out that we don’t have sacred texts in science. Unlike religions which fixate upon a single original source**, we recognise those who made contributions to the development of our field but don’t treat them as inviolable truth. Darwin, like all scientists, got some things wrong, didn’t quite manage to figure out some other problems, and occasionally changed his mind. None of this undermines his brilliance; the overwhelming majority of his ideas have stood the test of time, and given the resources and knowledge he had available to him (remembering that it was another century until we understood the structure of DNA), his achievement was astonishing.

Confession time: I haven’t read On the Origin. Maybe I will one day, but right now it’s not on my very long reading list.

There are many good reasons for reading On the Origin, none of which I need to be told. By all accounts it’s a fascinating, well-written and detailed argument from first principles for the centrality of natural selection in evolution. As a historical document and inspiration for the entire field of biology its importance is unquestionable. I’m certain that Richard Dawkins has read it, even if he didn’t memorise the title.

None of this means that I have to read it. The fundamental insight has been affirmed, repeated and strengthened by over 150 years of scientific study and publication. Even though I used to be a creationist, it didn’t take reading Darwin to change my mind***. What we know now makes a modern account more convincing than a Victorian naturalist could ever have managed.

An even more embarrassing confession is that I haven’t read The Theory of Island Biogeography****. This admission is likely to provoke horror in anyone from that generation of ecologists (my own lecturers) who remember the seismic impact that MacArthur & Wilson’s 1967 book had on the field. It defined the direction of enquiry in many areas of ecology for decades afterwards and effectively founded the scientific discipline of conservation biology. Some of their ideas turned out to be flawed, but the majority of ecologists still view the central model as effectively proven.

I’m not one of them. Yes, I apparently fall among the minority of ecologists, albeit led by some pretty influential voices, who view the model as so partial and incomplete as to lack predictive value in the real world (I’m not going to lay out my argument here, I’ve done that before). That I’ve reached this decision without reading the original book doesn’t perturb me in the slightest. In the same way as I’m confident that I can understand evolutionary theory without reading Darwin, I’ve read enough accounts of the Equilibrium Model of Island Biogeography (and taught it to undergraduates) that it’s not as if going back to the original source will change my mind.

If this upsets you then consider whether you’re happy to agree with the majority of evolutionary biologists that Lamarck’s model of inheritance was wrong without bothering to read Lamarck (or his later advocate Lysenko). Lamarck made many great contributions to science; this wasn’t among them. For similar reasons I’m happy to make judgements on Haeckel’s embryological model of evolution (rejected), Wegener’s theory of plate tectonics (accepted), or Hubbell’s neutral theory (ambivalent), all without reading the original books.

What have I actually read then? Among the great classics of our field I’m pleased to have gone through a large number of Wallace’s original works, which were contemporaneous to Darwin, and Humboldt’s Essay on the Geography of Plants (1807). I can strongly recommend them. But they didn’t change my mind about anything. It was enjoyable to go back to the original sources, and by the end I was even more impressed by the authors’ achievements than before, but my understanding of the world remained unaltered. For that reason I wouldn’t ever claim that everyone should read them.

There are, however, a number of books which have changed my mind or radically reorganised my understanding of the world. These include Chase & Leibold’s 2003 book about niches and Whittaker & Fernandez-Palacios on islands. Without having read them I wouldn’t hold the opinions that I do today. I’m glad that I placed those higher on my reading list than On the Origin. But that certainly doesn’t make them essential reading for everyone.

We all follow our own intellectual journeys through science and there is no one true path. For this reason I’m always sceptical of attempts to set essential reading lists, such as the 100 papers every ecologist needs to read, which I and others disagreed with more on principle than content. So yes, if you like, you can think less of me for the reading that I haven’t done. But my guess is that many people who read this post will be feeling a quiet reassurance that it’s not just them, and that it’s nothing to be ashamed about.


* It is, of course, the barely memorable “On the Origin of Species by Means of Natural Selection or the Preservation of Favoured Races in the Struggle for Life.”

** This in itself is baffling given that sacred texts have their own complex histories of assembly from multiple sources. Most modern Christians don’t dwell on the fact that the issue of which books to include in the Bible was so contentious, especially for the Old Testament, and some traditions persist with quite different Bibles. Why include Daniel but not Enoch? Then there’s deciding which version should be seen as definitive, and whose translation… it’s not as simple as picking the one true book.

*** Notably a dominant theme in creationist critiques of evolution is to pick away at perceived errors or inconsistencies in Darwin’s writings on the assumption that undermining its originator will unravel the whole enterprise of modern biology.

**** And this from a former book reviews editor of the journal Frontiers of Biogeography. They’ll be throwing me out of the Irritable Biogeography Society next.


What academic journals should I follow?


Yay, more journal issues have arrived! I’ll add them to the heap. It’s becoming a fire hazard.

A few years ago I bemoaned the fact that I had effectively stopped reading the academic literature. Despite apparently being a common phenomenon among mid-career academics, at least based on my conversations with colleagues, it provoked a nagging guilt. How can we tell our students to read constantly if we don’t practice what we preach?

Over the years the table of contents emails continued to pile up, causing permanent low-level stress as I realised how much interesting, relevant and important science was simply passing me by. But there was no time to do anything about it, nor would there ever be. With a heavy heart I deleted them all. This has, in effect, blinded me to several years of output in almost all of the journals that I used to follow*.

That’s not to say I haven’t been reading any papers. Every time I need to write a manuscript, proposal or lecture, I’ve carried out a targeted search and found what I needed to get the job done. This is a limited way to learn about science though; it doesn’t expose you to as many new ideas. I was raiding the literature, not reading it.

I’ve now come up with a new system based on the principle that it’s better to do a small amount well than attempt too much and fail. This involves selecting ten journals for which my aim is to scan the contents of every issue and read the papers that are most compelling. They make up my ‘essential’ list. Next is a set of ten that I will scan if I have time; if the next issue comes out before I’ve had a chance, the old one gets ignored. This means I will only follow a maximum of 20 journals at any given time**.

Essential: Science, Nature, PNAS, PRSB, Nature Ecology & Evolution, Ecology Letters, Ecology, Journal of Ecology, Ecological Monographs, TREE.

Time-permitting: American Naturalist, Nature Communications, Methods in Ecology & Evolution, Frontiers in Ecology and the Environment, PPEES, Forest Ecology and Management, J Veg Sci, Biotropica, GEB, Journal of Biogeography.

I could easily list another ten, or twenty, that I would love to read if there were room in my life, but there isn’t. It’s been a tough decision-making process. If you’ve tried something similar, then what did you end up with? How did you decide? If anyone is interested then my rationale for selection is below the fold.

I’ve focussed here on how to keep pace with new literature. It doesn’t even mention other issues such as the value of reading older papers, reading outside your own narrow field of study, or whether sometimes it’s best not to read at all. Some people will even argue that the whole concept of journals is becoming obsolete, and in a world of online search engines we no longer need them as anything other than gatekeepers. I have some sympathy for this view, but the Brave New World has yet to arrive, so I’m making use of the system we have.


Why I stopped reading the literature

This year I stopped reading the academic literature. Not entirely, of course — that would be career suicide. Nor is this a deliberately awkward response to the latest hashtag tyranny of #365papers, where fellow academics post how many papers they’ve read either to impress others or make them feel guilty. Mine was an accident that has settled into a default state.

For the last decade I have been able to claim with confidence that I read roughly 1000 papers a year. Now when I say read, you should be given to understand that this doesn’t mean poring over every single word. The normal protocol is to read the abstract, skim the introduction, flick through the figures then read the discussion until it gets boring*. If there’s anything that needs further scrutiny then I’ll look more closely, but it’s rare that the methods will receive more than cursory attention, perhaps checking for a few key words or standard techniques. I think most academics would say that in practice this is how they read papers.

By the end of the week I’m not mentally capable of intellectually-demanding work like writing manuscripts or analysing data, unless the pressure of a deadline forces me into it. So I’ve tended to hold Friday afternoons as a drop-in time for my group, and spent the gaps between meetings looking through recent journal issues and reading papers. This has helped me keep up to date with novel ideas, exposed me to new studies, and honed my awareness of what types of things are getting published.

My pattern of work all changed in the last academic year because a new module was inflicted on me, with sessions scheduled in the Friday afternoon slot. No-one wants that time, least of all the students. It’s perhaps only marginally less unpopular than 9am on a Monday morning. Who wants to be in a lecture when there are pubs to go to? (I mean on Fridays, not 9am on Mondays. We’re not all alcoholics in the UK.)

My journal alerts system (I use Zetoc) has built up over the years to incorporate a wide array of sources. There are tables of contents for particular journals, search terms for the fields that I specialise in, and even a few names of colleagues whose work particularly interests me. I’m lucky enough to not need to keep track of competitors because I work in a field that no-one cares about so there’s little risk of being scooped**. At this moment the total number of unread alerts is about to pass 300. Catching up on all of those has reached the point where it’s simply impossible, unless I take a few weeks’ holiday and spend the whole time on academic reading. Which I’m not going to do.

When I was a (more) junior academic I remember being told by (more) senior academics that they didn’t read the literature any more. This struck me as a great pity. One phrase that I heard second-hand, supposedly from Chris Thomas, was that he no longer reads the literature — he raids it. If you’re writing a manuscript and need a reference to make a specific point then you go looking for an appropriate paper rather than attempting to follow everything. Another colleague told me that he expects his group to be his eyes into the literature, and relies on them to spot important new publications, which he gleans from their manuscripts and recycles into the next grant proposal.

With mixed feelings I’ve realised that I’m now headed in the same direction. I’m coming to terms with the idea that, in many cases, my graduate students have a firmer grasp of the frontline of the field than I do. Perhaps this isn’t such a bad thing. Over the last few years while writing a textbook it’s been necessary for me to keep on top of the literature to make sure I’m up-to-date. When covering so many subjects at once this is an overwhelming task. Delivering the final copy to the publishers removed the ongoing pressure to read and read more. But why was that process not fun? How can someone who loves his research and is passionate about his field not unequivocally enjoy the process of reading and discovering more about it?

A clue comes from a Masters-level class on science writing that I’ve just finished. This year I introduced a new exercise: the students were asked to come along with a piece of writing that they enjoyed reading. This could be anything at all — a book, website, magazine, paper — so long as it was in prose. Out of a class of 35, only one brought an article from a scientific journal. There were a handful with popular science books (Dawkins, E.O. Wilson), but the overwhelming majority arrived carrying fiction books.

What does this tell us? A small sample size, I know, but at least it’s an indication. These keen and bright students, at a top university***, immersed in the scientific literature, don’t first think of an academic paper when they’re asked about the most enjoyable things they read. This is probably because, for the most part, academic writing is terrible. Not many people would choose to read it for fun in their spare time. I read constantly at home — but the pile of papers in the corner isn’t the first thing I reach for.

The purpose of our class exercise was to look at the structure of enjoyable writing and see whether there are lessons that can be learnt for our own work. The pointers were perhaps predictable but nonetheless helpful: shorter sentences, simpler words, a focus on engaging rather than impressing the reader. My hope is that one day some of these students go on to produce a higher quality of scientific prose than the general average. Perhaps, in our small ways, we can redirect the tenor of academic writing and make it more pleasurable to read. Who knows, it might get me reading again.

* They all do, even mine. It’s the point where the author switches from actually discussing the results and their implications, and moves on to tenuous speculation or unnecessary criticism of other people’s work.

** This isn’t quite true on two counts. Firstly, there are plenty of people working on spatial self-organisation in natural systems. My experience, however, is that they’re (almost) all nice, supportive and collegiate people who encourage one another. I’ve never got the impression that there’s any competition. The other reason why scooping isn’t so much of a risk is that in ecology, data is king. No-one is going to beat me to publishing papers on Kamchatkan forest organisation because I’m pretty sure that no-one else has those kinds of data.

*** That’s what we’d like to believe, anyway. We do pretty well in some league tables but aren’t as impressive in others. Mostly we end up in the global top 100 and the UK top 20.

Consult the index

I’m presently mired in what is one of the most tiresome, tedious tasks I’ve had to perform in my academic career. Bear in mind that I say this as someone who spent three years tracking levels of herbivore damage on 20 000 individual leaves as part of my PhD. I’ve counted pollen. I’ve catalogued herbarium specimens. This is an order of magnitude worse.

The task at hand is to produce an index for my textbook, Natural Systems: The Organisation of Life*, which is finally due to be published in March 2016. I knew that indexing would be hard. I didn’t quite appreciate how hard. And that’s while using LaTeX, which makes everything much more straightforward. I can’t even imagine having to do this in hard copy or (shudder) in Microsoft Word**. There are some useful guides to indexing. There’s even a book called Index Your Book Fast, though one suspects that the time taken in reading it would more than offset any gains. None of them make it any easier.
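For anyone who hasn’t indexed in LaTeX before, a minimal sketch of the machinery (using the standard makeidx package, with made-up entries for illustration) looks something like this. The typesetting side really is trivial; all the labour lies in deciding which mentions merit an \index entry:

```latex
% Preamble: load the standard indexing package
\documentclass{book}
\usepackage{makeidx}
\makeindex

\begin{document}
% A substantive mention gets a plain entry; the ! syntax
% nests a subentry under a parent term in the final index
Island biogeography\index{island biogeography} predicts that
species richness\index{species richness!on islands} scales
with island area.

% The sorted index is typeset at the back of the book
\printindex
\end{document}
```

Running latex, then makeindex on the resulting .idx file, then latex again produces the finished index, with entries alphabetised and page numbers collated automatically.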

While it’s not difficult to imagine an ideal index in abstract terms, actually putting one together is trickier. I’m currently working through the book sentence-by-sentence, deciding whether this or that term is a passing or substantive mention, whether it needs to be nested within other groups, and when I might ever finish. Who or what deserves a place in the index? Main concepts are obviously in. What about taxa, important people, study sites, species… where does it end?

As a book reviews editor myself (for Frontiers of Biogeography) I’m acutely aware of that typical complaint by reviewers that ‘subject X doesn’t even make it into the index!’ This could mean any number of things: that the subject isn’t covered by the book, that the index has omitted to mention it, or that the reviewer hasn’t read the book properly. A skim of the index is often one of the first things a prospective purchaser does while browsing and forms a central element of the impression a book makes. Getting it right is crucial because it makes a book more useful to future readers. Too long or trivial and it’s overwhelming; too short and it looks skimpy.

One might ask why I’ve bothered writing a blog post about a topic so dull as indexing (although if you’re finding this particularly fascinating then you should read The Indexer, the international journal of indexing). In part it’s as a corrective to recent posts which may have given the false impression of my life as one of tropical jaunts spent being pursued by dangerous animals. All that happens, but actually 9 months of my year is spent in front of a computer screen. I’m also keen that you realise, when you turn to the back of a book and flick through the index, that a surprising amount of work has gone into preparing it. And, in my case, a surprising amount of wine.

* The blurb on this site is a cut-and-paste from the original proposal, submitted three years ago, and doesn’t really capture the book content. The cover image is also under review right now. All this will be filled in over the next couple of months.

** I haven’t used Word in several years, and it’s made my life immeasurably happier. You could do the same.

How to respond to referees’ comments

The first time I submitted a manuscript, it came back months later with a lengthy list of recommended changes and an equivocal response from the editor which implied that he was reluctant to hear from me again but might deign to respond if I proved myself worthy. I was devastated. It had been an immense amount of work and effort to prepare, and by now I’d moved on to other things. I glumly sloped into my supervisor’s office and was taken aback by his enthusiasm. Apparently this was what passed as good news in science *.

Since then I’ve been through the manuscript submission mill many times and always prepare my students in advance for the likely tone of what they will receive. It doesn’t get any easier. I still can’t read comments as soon as they arrive. Normally I’ll read what the editor says, skim the rest, then go for a short walk around the lake to calm down. Sometimes it takes several laps.

Eventually, however, you need to brace yourself and get down to the revisions. Clear your diary, close the door, unplug the phone (and the internet) and make sure there are no distractions. Don’t leave until it’s done. Unpleasant jobs are always the easiest ones to procrastinate from, and revising a manuscript comes pretty low on my list of favourite ways to spend an afternoon.

Assuming you have an invited resubmission (rather than an outright rejection **), here is a quick guide to how to respond to manuscript reviews that I wrote for my PhD students:

  1. Write back to the editor immediately, thanking them and the referees for their time and helpful comments. Even if you’re not grateful and they weren’t helpful. Even if they rejected the manuscript ***. Being nice works wonders in the long term because they will see your work again. They have also taken their own limited time, usually unpaid, to look at what you’ve submitted.
  2. Compose a response letter, starting in much the same way. List and address every single comment made by the editor and referees sequentially and in full. Keep in the positive ones too, it makes you feel better.
  3. Make it as easy as possible for the editor to tell that you’ve made the changes requested. This means that instead of saying ‘This has been done’, or ‘A paragraph on this has been added to the discussion’, say ‘This is a very helpful comment. We have therefore inserted a new paragraph in lines 283–292 which explains how…’ etc. Editors are busy and don’t like to have to work harder to check whether you’ve followed instructions.
  4. Tread carefully if you disagree with any comment. If it makes no material difference then make the change, even if it’s only a matter of preference. Only contest if you are convinced that the referee is wrong and you can back it up. Even so, apologise for not making the manuscript clear enough and specify where you have added clarifications or extra evidence in the text. If you’ve failed to convince them first time around then it implies that you need to change something.
  5. Try not to use track changes, comments, bold type or other formatting to note changes to the manuscript itself. In my experience (usually when requested to by editors…) this leads to errors in the final copy. Refer to line numbers instead.
  6. Take extra time on the figures. Clear, high-quality figures give your paper a greater chance of being read, cited and used by others. If the figures look amateur then no-one will bother reading the text. Use this opportunity to redraw and tweak them using proper tools (e.g. inkscape, sK1, ImageMagick, gimp). Don’t rely on Microsoft Office products to create publication-quality images.
  7. Never play referees off against each other. If they disagree on a point then compromise and ask the editor for guidance. Also note that if only one referee picks up on something, this does not imply that all the others are on your side. They may simply not have noticed.

Finally, in almost all cases reviewers are doing it because they genuinely care about maintaining standards in the scientific literature and improving the quality of work that gets published. There are some cases when a reviewer might block work that is too close to their own, or that contradicts it, or act out of some personal vendetta against you or your collaborators. This is exceptionally rare though, and can seldom be demonstrated. Even if you suspect it, you’re most likely wrong, and should never say so in your response. No-one is out to thwart you.

Good luck, and remember, we all go through this. If it starts to get you down then go and vent to a colleague. Everyone has stories to share.


* A friend at a university in a developing country once related that the modal number of papers among his faculty colleagues was zero. Exploring the causes of this, it transpired that in many cases they had once submitted something to an international journal and been so offended by the audacity of the response that they had vowed to never subject themselves to such humiliation again. This was true of even senior professors.

** I would recommend doing all this even if you’ve been rejected. Partly because you have a high risk of coming across the same referees again at a different journal, but mainly because it forces you to confront the criticisms of your work.

*** Don’t contest a rejection unless one of two things apply. Either there has been a gross mistake made by one of the referees, and you can unequivocally demonstrate this. Or you’re submitting to one of the big journals (Nature, Science) when putting up a fight can make a difference. Apparently. It’s never worked for me.