Author Archives: Markus Eichhorn

About Markus Eichhorn

I'm a lecturer in ecology at University College Cork. My research studies how patterns of trees in forests form, and how the organisation of forests influences the things that live inside them. Hence trees in space!

Junk pedagogical research


Can I teach? I think so. Should I aspire to publish papers telling other people how to teach? Probably not. This is me showing students how to estimate tree height.

In my last job I was employed on a teaching-track position*. For many years this worked reasonably well for me. I enjoy teaching, I think I’m quite good at it, and I didn’t mind a slightly higher load as the price of not needing to satisfy arbitrary targets for research grant income or publication in high-impact journals. That’s not to say I stopped doing research, because obviously that didn’t happen, but I accepted that there was a trade-off between the two and that I was closer to one end of the spectrum. It still left me three clear months every summer to get out into the field and collect data.

Many UK universities developed teaching-track positions in response to the national research assessment exercise (the REF**) which incentivised them to concentrate resources in the hands of a smaller number of staff whilst ensuring that someone else got on with the unimportant business of running the university and the distraction of educating undergraduates. Such is the true meaning of research-led teaching.

A problem began to arise when those staff who had been shuffled into teaching-track positions applied for promotion. The conventional signifiers of academic success weren’t relevant; you could hardly expect them to bring in large grants, publish in top-tier journals or deliver keynotes at major conferences if they weren’t being given the time or support to do so.

Some head-scratching took place and alternative means were sought out to decide who was performing well. It’s hard enough to determine what quality teaching looks like at an institutional level***, and assessing individuals is correspondingly even more difficult.

The first thing to turn to is student evaluations. These largely measure how good a lecturer is at entertaining and pleasing their students, or how much the students enjoy the subject. Evidence suggests that evaluation scores are negatively related to how much students actually learn, as well as being biased against women and protected minorities. In short they’re not just the wrong measure, they’re actively regressive in their effects. Not that this stops many universities using them of course.

What else is there? Well, being academics, the natural form of output to aim for is publications. It’s the only currency some academics understand. Not scientific research papers, of course, because teaching staff aren’t supposed to be active researchers. So instead the expectation became that they would publish papers based on pedagogical research****. This sounds, on the face of it, quite sensible, which is why many universities went down that route. But there are three major problems.

1. Pedagogical research isn’t easy. There are whole fields of study, often based in departments of psychology, which have developed approaches and standards to ensure that work is of appropriate quality. Expecting an academic with a background in biochemistry or condensed matter physics to publish in a competitive journal of pedagogical research without the necessary training is unreasonable. Moreover, it’s an implicit insult to those colleagues for whom such work is their main focus. Demanding that all teachers should publish pedagogical research implies that anyone can do it. They can’t.

2. Very few academics follow pedagogical research. That’s not to say that they shouldn’t. Most academics teach and are genuinely interested in doing so as effectively as possible. But the simple truth is that it’s hard enough to keep track of the literature in our areas of research specialism. Not many can make time to add another, usually unrelated field to their reading list. I consider myself more engaged than most and even I encounter relevant studies only through social media or articles for a general readership.

3. A lot of pedagogical research is junk. Please don’t think I’m talking about the excellent, specialist work done by expert researchers into effective education practice. There is great work out there in internationally respected journals. I’m talking about the many unlisted, low-quality journals that have proliferated over recent years, and which give education research a bad name. Even if they contain some peer review process, many are effectively pay-to-publish, and some are actively predatory. I won’t name any here because that’s just asking for abusive e-mails.

Why do these weak journals exist? Well, we have created an incentive structure in which a class of academics needs to publish something — anything — in order to gain recognition and progress in their careers. A practice which we would frown upon in ‘normal’ research is actively encouraged by many of the world’s top universities. Junk journals and even junk conferences proliferate as a way to satisfy universities’ own internal contradictions.

What’s the alternative? I have three suggestions:

1. Stop imposing an expectation based on research onto educators. If research and teaching are to be separated (a trend I disagree with anyway) then they can’t continue to be judged by the same metrics. Incentivising publications for their own sake helps no-one. Some educators will of course want to carry out their own independent studies, and this should be encouraged and respected, but it isn’t the right approach for everyone.

2. Put some effort into finding out whether teachers are good at their job. This means peer assessments of teaching, student performance and effective innovation. All this is difficult and time-consuming, but if we want to recognise good teachers then we need to take the time to do it properly. Proxy measures are no substitute. The fact that someone can write a paper about teaching doesn’t mean that they can teach.

3. Support serious pedagogical researchers. If you’re based in a large university then there’s almost certainly a group of specialist researchers already there. How much have you heard about their work? Have you collaborated with them? Universities have native expertise which could be used to improve teaching practice, usually much more efficiently than forcing non-specialists to jump through hoops. If the objective is genuinely to improve teaching standards then ask the people who know how to do it.

If there’s one thing that shows how evaluations of teaching aren’t working or taken seriously it’s that universities don’t make high-level appointments based on teaching. Prestige chairs exist to hire big-hitters in research based on their international profile, grant income and publication record. When was the last time you heard of a university recruiting a senior professor because they were great at teaching? Tell me once you’ve stopped laughing.

 


 

* This is now relatively common among universities in Europe and North America. The basic principle is that some staff are given workloads that allow them to carry out research, whilst others are given heavier teaching and administrative loads but the expectations for their research income and outputs are correspondingly reduced.

** If you don’t know about the Research Excellence Framework and how it has poisoned academic life in the UK then don’t ask. Reactions from those involved may vary from gentle sobs to inchoate screaming.

*** Which gave rise to the Teaching Excellence Framework, or TEF, and yet more anguish for UK academics. Because the obvious way to deal with the distorting effect of one ranking system is to create another. Surely that’s enough assessment of universities based on flawed data? No, of course not, because there’s also the Knowledge Exchange Framework (KEF) coming up. I’m not even joking.

**** Oddly textbooks often don’t count. No, I can’t explain this. But I was told that publishing a textbook didn’t count as scholarship in education.

From tiny acorns

My father planted acorns.

This is one of those recollections that arrives many years after the fact and suddenly strikes me as having been unusual. As a child, however, it seemed perfectly normal that we should go out collecting acorns in the autumn. Compared to my father’s other eccentric habits and hobbies, of which there were many*, gathering acorns didn’t appear to be particularly strange or worthy of note.

In our village during mast years the acorns would rain down from boundary and hedgerow oak trees and sprout in dense carpets along the roadside. This brief flourishing was inevitably curtailed by the arrival of Frankie Ball, the local contractor responsible for mowing the verges. His indiscriminate treatment shredded a summer’s worth of growth and ensured that no seedlings could ever survive.


Sprouting acorn (Quercus robur L.) by Amphis.

Enlightened modern opinion would now declare that mowing roadside verges is ecologically damaging; it removes numerous late-flowering plants and destroys potential habitats for over-wintering insects. I’m not going to pass such judgement here though because it was a purely practical decision. Too much growth would result in blocked ditches, eventually flooding fields and properties. Frankie was just doing his job.

My father, however, couldn’t allow himself to see so many potential oak trees perish. His own grandfather had been a prominent forester back in the old country (one of the smaller European principalities that no longer exists), and with a family name like Eichhorn it’s hard not to feel somehow connected to little oak trees. He took it upon himself to save as many of them as he could.

And so it was that we found ourselves, trowels in hand, digging up sprouting acorns from the roadsides and transporting them by wheelbarrow to the wood on Jackson’s farm.  Here they would be gently transplanted into locations that looked promising and revisited periodically to check on their progress. Over the years this involved at least hundreds of little acorns, perhaps thousands.

They all died. This isn’t too surprising: most offspring of most organisms die before they reach adulthood. Trees have a particularly low rate of conversion of seedlings to adults, probably less than one in a thousand. That’s just one of the fundamental facts of life and a driving force of evolution. Why though did my father’s experiment have such a low success rate? He’d apparently done everything right, even choosing to plant them somewhere other trees had succeeded before**. It’s only after becoming a forest ecologist myself that I can look back and see where he was going wrong.

First, oak trees are among a class of species that we refer to as long-lived pioneers. This group of species is unusual because most pioneers are short-lived. Pioneers typically arrive in open or disturbed habitats, grow quickly, then reproduce and die before more competitive species can drive them out. Weeds are the most obvious cases among plants, but if you’re looking at trees then something like a birch would be the closest comparison.

Oaks are a little different. Their seedlings require open areas with lots of light to grow, which means that they don’t survive well below a dark forest canopy. Having managed to achieve a reasonable stature, however, they stick around for many centuries and are hard to budge. In ecology we know this as the inhibition model of succession. Oaks are great at building forests but not so good at taking them over.

The next problem is that oak seedlings do particularly badly when in the vicinity of other adult oak trees. This is because the pests and diseases associated with large trees quickly transfer themselves to the juveniles. An adult tree might be able to tolerate losing some of its leaves to a herbivore but for a seedling with few resources this can be devastating. This set of forces led to the Janzen-Connell hypothesis which predicts that any single tree species will be prevented from filling a habitat because natural enemies ensure that surviving adults end up being spread apart. A similar pattern can arise because non-oak trees provide a refuge for oak seedlings. Whatever the specific causes, oak seedlings suffer when planted close to existing oaks.
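The logic of that distance-dependence is easy to see in a toy simulation. The sketch below is purely illustrative and not from any real dataset: the dispersal kernel, the survival function and every parameter value are invented assumptions, chosen only to show how enemies concentrated around the parent leave the surviving recruits further away, on average, than the seed rain alone would place them.

```python
import numpy as np

rng = np.random.default_rng(42)

# Toy Janzen-Connell sketch (all numbers are invented, not field-calibrated):
# most acorns land near the parent, but survival improves with distance
# because parent-associated pests and pathogens thin out nearby seedlings.
n_seeds = 100_000
seed_distance = rng.exponential(scale=5.0, size=n_seeds)   # metres from parent

# Hypothetical survival probability: rises with distance, saturating at ~30%
p_survive = 0.3 * (1 - np.exp(-seed_distance / 10.0))
survived = rng.random(n_seeds) < p_survive

print(f"Mean distance of all seeds: {seed_distance.mean():.1f} m")
print(f"Mean distance of survivors: {seed_distance[survived].mean():.1f} m")
print(f"Overall survival:           {survived.mean():.1%}")
# Survivors sit noticeably further from the parent than the seed rain itself,
# which is the spacing pattern the hypothesis predicts.
```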

This makes it seem a little peculiar that acorns usually fall so close to their parent trees. The reason acorns are such large nuts*** is that they want to attract animals which will try to move and store them over winter. This strategy works because no matter how many actually get eaten, a large proportion of cached acorns remain unused (either they’re forgotten or the animal that placed them dies) and so they are in prime position to grow the following spring. Being edible and a desirable commodity is actually in the interests of the tree.


A Eurasian jay, Garrulus glandarius. Image credit: Luc Viatour.

Contrary to most expectations, squirrels turn out to be pretty poor dispersers of acorns. Although they move acorns around and bury them nicely, they don’t put them in places where they are likely to survive well. Jays are much better, moving oaks long distances and burying single acorns in scrubby areas where the new seedlings will receive a reasonable amount of light along with some protection from browsing herbivores. My father’s plantings failed mainly because he wasn’t thinking like a jay.

My father’s efforts weren’t all in vain. The care shown to trees and an experimental approach to understanding where they could grow lodged themselves in my developing mind and no doubt formed part of the inspiration that led me to where I am today****. From tiny acorns, as they say.

 


 

* His lifelong passion is flying, which at various points included building his own plane in the garden shed and flying hang-gliders. It took me a while to realise that not everyone’s father was like this.

** One possible explanation we can rule out is browsing by deer, which often clear vegetation from the ground layer of woodlands. Occasional escaped dairy cows were more of a risk in this particular wood.

*** Yes, botanically speaking they are nuts, which means a hard indehiscent (non-splitting) shell containing a large edible seed. Lots of things that we call nuts aren’t actually nuts. This is one of those quirks of terminology that gives botanists a bad name.

**** Although I’m very sceptical of teleological narratives of how academics came to choose their areas of study.

If you can’t sketch a figure then you don’t have a hypothesis


Caterpillar feeding on Urena lobata leaf in Pakke Tiger Reserve, Arunachal Pradesh, India. What is the response by the plant? Photo by Rohit Naniwadekar (CC BY-SA).

This is one of my mantras that gets repeated time and again to students. Sometimes it feels like a catch-phrase, one of a small number of formulaic pronouncements that come out when you pull the string on my back. Why do I keep saying it, and why should anyone pay attention?

Let me give an example*. Imagine you’re setting up an experiment into how leaf damage by herbivores influences chemical defence in plant tissues. You might start by assuming that the more damage you inflict, the greater the response induced in the plant. This sounds perfectly sensible. So are you ready to get going and launch an experiment? No, absolutely not. First let’s turn your rather flimsy hypothesis into an actual expectation.

Sketch 1: the induced defence response increasing as a straight line with leaf damage.

Why is this a straight line? Because you’ve not specified any other shape for the relationship. A straight line has to be the starting point, and if you collect the data and plug it directly into a GLM without thinking about it, this is exactly what you’re assuming. But is there any reason why it should be a straight line? All sorts of other relationships are possible. It could be a saturating curve, where the plant reaches some maximum response; this is more likely than a continuous increase. Alternatives include an accelerating response, or a step change at a certain level of damage.

Sketch 2: alternative shapes for the damage–defence relationship (saturating, accelerating, or a step change).
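If a pencil isn’t to hand, these candidate shapes are quick to draw explicitly. This is only a throwaway sketch of my own: the axis ranges, the parameter values and the 40% threshold are arbitrary choices, not anything specified above, but comparing the curves side by side makes it obvious how different the sampling designs needed to distinguish them would be.

```python
import numpy as np
import matplotlib.pyplot as plt

damage = np.linspace(0, 100, 200)   # % leaf area removed (illustrative range)

# Four candidate shapes for the damage -> induced defence relationship.
# Parameter values are arbitrary; only the qualitative contrast matters.
linear       = damage / 100
saturating   = 1 - np.exp(-damage / 20)          # levels off at a maximum response
accelerating = (damage / 100) ** 2               # slow start, steep later
step         = np.where(damage < 40, 0.1, 0.9)   # threshold at ~40% damage

for y, label in [(linear, "linear"), (saturating, "saturating"),
                 (accelerating, "accelerating"), (step, "step change")]:
    plt.plot(damage, y, label=label)

plt.xlabel("Leaf damage (%)")
plt.ylabel("Induced defence response (arbitrary units)")
plt.legend()
plt.show()
```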

 

Perhaps what you need to do is take a step back and think about that x-axis. What levels of damage are you measuring, and why? If you expect an asymptotic response then you’re going to need to sample quite a wide range of damage levels, but there’s no need to continue to extremes because after a certain point nothing more is going to happen. If it’s a step change then your whole design should concentrate on sampling intensively around the point where the shift occurs so as to identify that parameter accurately. And so on. Drawing this first sketch has already forced you to think more carefully about your experimental design.

Let’s not stop there though. Look at the y-axis, which rather blandly promises to measure the induced defence response. This isn’t an easy thing to pinpoint though. Presumably you don’t expect an immediate response; it will take time for the plant to metabolise and mobilise its chemical defences. How long will they take to reach their maximum point? After that it’s unlikely that the plant will maintain unnecessarily high levels of defences once the threat of damage recedes. Over time the response might therefore be humped.  Will defences increase and decrease at the same rate?

Sketch 3: defence levels rising after damage and then declining over time, not necessarily at the same rate.

This then turns into a whole new set of questions for your experimental design. How many sample points do you need to characterise the response of a single plant? What is your actual response variable: the maximum level of defences measured? When does this occur? What if the maximum doesn’t vary between plants but instead the rate of response increases with damage?
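To make those questions concrete, here is a small hypothetical example of pulling a response variable out of a sampled time course. The humped curve, the assay days, the noise level and the helper function defence() are all my own assumptions for the sake of illustration, not part of any real design.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical humped time course of induced defences after damage:
# a fast rise and a slower decline (difference of two exponentials).
# Time constants are invented purely for illustration.
def defence(t, amplitude=1.0, rise=2.0, decay=15.0):
    return amplitude * (np.exp(-t / decay) - np.exp(-t / rise))

# Suppose each plant can only be assayed on a handful of days.
sample_times = np.array([1, 3, 7, 14, 28])
measured = defence(sample_times) + rng.normal(0, 0.02, size=sample_times.size)

# Candidate response variables for the later analysis:
peak_level = measured.max()                      # maximum defence observed
time_of_peak = sample_times[measured.argmax()]   # when it occurred
early_rate = (measured[1] - measured[0]) / (sample_times[1] - sample_times[0])

print(f"Peak defence: {peak_level:.2f} (on day {time_of_peak})")
print(f"Initial rate: {early_rate:.3f} per day")
# With only five assay days the 'peak' is just the largest of five snapshots:
# exactly the kind of hidden assumption that sketching the figure exposes.
```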

This thought process can feel like a step backwards. You started with an idea and were all set to launch into an experiment, but I’ve stopped you and riddled the plan with doubt. That uncertainty was already embedded in the design though, in the form of unrecognised assumptions. In all likelihood these would come back to bite you at a later date. If you were lucky you might spot them while you were collecting data. More likely you would only realise once you came to analyse and write things up**.

This is why I advocate drawing your dream figure right at the outset. Not just before you start analysing the data, but before you even begin collecting it. The principle applies to experimental data, field sampling, even to computer simulations. If you can’t sketch out what you expect to find then you don’t know what you’re doing, and that needs to be resolved before going any further***.

If you’re in the position where you genuinely don’t know the answers to the types of questions above then there are three possible solutions:

  • Read the literature, looking for theoretical predictions that match your system and give you something to aim for. Even if the theory doesn’t end up being supported, you’ve still conducted a valid test.
  • Look at previous studies and see what they found. Note that this isn’t a substitute for a good theoretical prediction; “Author (Date) found this therefore I expect to find it too” is a really bad way to start an investigation. More important is to see why they found what they did and use that insight to inform your own study.
  • Invest some time in preliminary investigations. You still have to avoid the circularity of saying “I found this in preliminary studies and therefore expected to find it again”. If you genuinely don’t know what’s going to happen then try, find out, and think about a robust predictive theory that might account for your observations. Then test that theory in a full and properly-designed experiment.

Scientists are all impatient. Sitting around dreaming about what we hope will happen can sound like an indulgence when there’s the real work of measurement to be done. But specifying exactly what you expect will greatly increase your chances of eventually finding it.

 


 

* This is not an entirely random example. I set up a very similar experiment to this in my PhD which consumed at least a month’s effort in the field. You’ll also find that none of the data are published, nor even feature in my thesis, because I found absolutely nothing. This post is in part an explanation of why.

** There are two types of research students. There are those who realise all-too-late that there were critical design flaws in some of their experiments. The rest are liars.

*** Someone will no doubt ask “what about if you don’t know at all what will happen”. In that case I would query why you’re doing it in the first place. So-called ‘blue skies research’ is never entirely blind but begins with a reasonable expectation of finding something. That might include several possible outcomes that can be predicted then tested against one another. I would argue that truly surprising discoveries arise through serendipity while looking for something else. If you really don’t know what might happen then stop, put the sharp things down and go and ask a grown-up for advice first.

 

Kratom: when ethnobotany goes wrong


Mitragyna speciosa (Korth.) Havil., otherwise known as kratom. Image credit: Uomo vitruviano

Efforts to control the trade and usage of recreational drugs* struggle against human ingenuity, driven by our boundless determination to get loaded. The search for new legal highs has led in two directions. One is the generation of new synthetic drugs which are sufficiently chemically distinct to avoid the regulations but which remain pharmacologically effective. These are often more dangerous than the illegal drugs they replace. The other is to mine the accumulated cultural and medical repositories of herbal lore from around the world to find psychoactive plants which haven’t yet been banned. Species are suddenly raised from obscurity to become the latest rush.

Over recent years a variety of plants have gone through a process of initial global popularity followed by a clamp-down, usually once a death has been associated with their use or abuse (even indirectly). A wave of media attention, hysteria and misinformation typically leads to regulatory action long before the formal medical evidence begins to emerge. One of the most recent episodes was over Salvia divinorum which enjoyed brief popularity as the legal hallucinogen of choice among students, despite being inconsistent in its effects and often quite unpleasant. Wherever you’re reading this, it’s probably already been banned.

Most of these plants have a long history of safe and moderate intake by indigenous populations in the regions where they grow naturally, either for ritual or medical purposes. The same can be said of many of the more common drugs we all know of: opium, coca and cannabis have many traditional uses stretching back for thousands of years. The problems arise when they are taken out of their cultural contexts and used for purely recreational purposes. This is often combined with plant breeding to increase the content of psychoactive ingredients, or with chemical treatments that enhance potency or synthesise the active components (as in the production of heroin). A relatively benign drug is transformed from its original form into something much more problematic.

The latest plant to emerge as a potential drug in Europe, though already banned in many places, is Mitragyna speciosa, more commonly known as kratom, and native to several countries in Southeast Asia. Here in Ireland its active ingredient has been designated a Schedule 1 drug** since 2017. Pre-emptive legislation in other countries is quickly catching up.

I will confess to not having heard of kratom before it became a Western health concern. This is probably true of most people outside Asia, but more surprising to me given a long-standing interest in ethnobotany in Southeast Asia and having lived in Malaysia. I had previously subscribed to the view expressed in most textbooks that natural painkillers were absent from the regional flora, an opinion confirmed through my own discussions with orang asal shamans***. This may be because kratom grows in drier areas than I’ve worked in; I’ve certainly never come across it in any surveys of traditional agricultural systems in Malaysia. Pain relief is one of the scarcest and most sought-after forms of medicine, so if kratom is effective then I’m now puzzled that it isn’t more widespread.

In its normal range kratom is used as a mild painkiller, stimulant and appetite suppressant in the same way as coca leaves have been used for thousands of years in South America. The dosage obtained from chewing leaves, or even from extracts prepared by traditional methods, is likely to be low. This is very different from its recent use in Western countries where higher dosages and combinations with other drugs (including alcohol and caffeine) are likely to both enhance its effects and increase its toxicity. It is also more commonly sold over the internet as a powder.

Nevertheless, a parallel increase in recreational use within its native range has also been reported, although as of 2016 with no deaths associated. Recreational use also has a long history in Thailand, and habitual use in Malaysia has been known of since at least 1836. It is now thought to be the most popular illegal drug in south Thailand, though arguments continue over whether it ought to be regulated, despite the acknowledged risk of addiction. In this it mirrors the discussion over khat, a botanical stimulant widely used across Arabia and East Africa****. Where cultural associations are long-standing it is seen as less threatening than drugs from overseas.

Along with its long use as a drug in its own right, kratom has also been used as a substitute for opium (when unavailable) or treatment for addiction. It also has a folk usage in treating hypertension. This points towards the potential for beneficial medical uses which would be delayed or unrealised if knee-jerk regulation prevents the necessary research from being conducted. Compare the situation with cannabis: its medical use in China goes back thousands of years, but the stigma associated with its use in the West (which we can partly blame on Marco Polo) delayed its decriminalisation. Cannabis remains illegal in many countries despite steadily accumulating evidence of its medical value.

I’m a firm believer that we should recognise, respect and learn from the botanical knowledge of other cultures. Banning traditional medicines (and even socially important recreational drugs) in their home countries on the basis of their abuse by people in the Global North is morally wrong. Excessive regulation also deprives us of many of the potential benefits which might come from a better understanding of these plants.

All this is not to diminish the real physical and mental damage caused by addiction, and the need to protect people from abuse and the associated social costs. But the urge to get high is nothing new, and the cultures that have formed alongside psychoactive plant chemicals, from morphine to mescaline, usually incorporated ways of controlling their use and appreciating them in safe moderation. In Tibetan regions where I’ve worked cannabis is a frequent garden plant, and teenagers enjoy a quiet rebellious smoke with little stigma attached, while adults eventually grow out of the phase. Our own immature explorations of ethnobotany need to learn that there’s much more to a plant than the titillation provided by its active ingredients.

 


 

UPDATE: after posting, @liana_chua pointed out this fascinating blog post by @PaulThung about the market-driven boom in kratom production in West Kalimantan, where it is known as puri. It’s a great reminder that what’s seen as a growing problem in the Global North can simultaneously be a development opportunity for people in deprived regions.

 


 

* I dislike the term ‘war on drugs’ because I don’t like the tendency to militarise the vocabulary surrounding complex and nuanced issues (see also ‘terror’). Also, as others have pointed out, war is dangerous enough without being on drugs as well.

** Schedule 1 covers drugs which have no accepted medicinal or scientific value. The classification in Ireland is therefore based on uses rather than the potential for harm or addiction. This can become a bit circular. For example, nicotine isn’t on the list, but its only medical usage is in treating nicotine addiction. Cannabis, however, is on the list.

*** Of course I hadn’t considered before now that this is something they might not want to talk about, given that kratom is regulated in Malaysia, where it is known as ketum,  although attempts to ban it outright have not yet come to fruition. Still I would have expected to spot a large shrub in the Rubiaceae if they were growing it deliberately.

**** I’ve often chewed khat in countries where its use is tolerated, and it’s a great way to boost your mood and energy level during fieldwork. It is also addictive, of course.

 

Keep your enemies close to save the planet


This year Toronto Wolfpack join rugby’s Super League. But at what cost to the climate? (Rick Madonik/Toronto Star via Getty Images)

I’m a rugby fan. In the past I played for several amateur teams as I moved between cities for work. In all honesty I was never a particularly good player, but being keen and available is usually enough to secure a place in the squad and a guaranteed game in the reserves. It’s been over ten years since I picked up a ball though. Rugby is definitely a game for the young*. These days I meet my craving for rugby vicariously through supporting my local rugby union club and the rugby league side from where I grew up.

Over the last 30 years or so the game has changed immensely, and mostly for the better. Rugby union turned professional in 1995 while rugby league had begun to do so a century earlier (this was the original reason for the split between the two codes). Overall this has meant an increased profile, a more enjoyable experience for the fans, better protection for players and most of all an improved quality of the game.

Another trend began in the last few years though, which is that the sports became international at club level. In 2006 the rugby league Super League ceased to be a UK-only affair with the arrival of French side Catalans Dragons. In the 2020 season it has been joined by Toronto Wolfpack from Canada. A league previously confined to a narrow corridor of northern England has suddenly become transatlantic. Meanwhile in Ireland my local rugby union side Munster now play in the elite PRO14 league. Originally the Celtic League and composed of sides from Ireland, Scotland and Wales, these days it takes in two clubs each from Italy and South Africa. In the southern hemisphere Super Rugby unites teams from Argentina to Japan. Hardly next-door neighbours.

Why should this matter? Surely it’s great to see some of the world’s best teams pitted against one another?

Watching the start of the rugby league season made me a little anxious. Toronto, being unable to play in Canada at this time of year (it’s still too cold), began their season with a heavy loss against Yorkshire side Castleford Tigers. Many of their fans were Canadians based in the UK and delighted to have a team from back home to support. But there was also a hard core of travelling fans. And what happens when UK sides start playing games in Canada: how many air miles will be racked up? Few fans can afford to make the trip, and certainly not often, but before now there was no reason even to consider it.

This internationalisation of sport at the club level is driving a new trend in long-haul leisure travel, whether it’s American Football games in London or South African rugby teams playing in Ireland. Entire teams, their support staff and a dedicated fan base are now making regular trips around the world for one-off fixtures.** These will inevitably involve more flights than would otherwise have occurred. At least if you’re playing teams in your own country you can usually take the train.

This has of course been going on with international-level sports for a long time, but there the frequency is much lower. Major tournaments only occur every few years and visiting national fans usually attend a couple of games, allowing for an economy of scale. Even in this arena there’s been an increase in travel; one of the reasons for the formation of multinational touring sides like the Lions was to spread the cost of long-distance trips among national associations. Now every country travels independently. Still, although the audiences for international fixtures are huge, most fans watch at home on the TV. Club fans usually don’t have that option.

In the face of a climate crisis all sectors of society have a part to play. Unnecessarily increasing travel emissions by creating trans-national sporting contests among teams with very local fan-bases strikes me as an unwelcome new direction. Commercialisation of the game has had many benefits but this is not one of them. Despite flygskam, growth of air travel has continued unabated, and international sporting fixtures are a contributing factor.

The most enjoyable, passionate and intense games are those between local rivals. Watching Munster face up to Leinster, or Warrington against St Helens***, is to me the pinnacle of each sport at club level. Best of all, we can watch them practically on our doorsteps. Keeping our sporting fixtures local is one small way in which we can reduce our impact on the planet. Sustainable rugby? Why not.

 


 

* Anyone who plays rugby past the age of 35 is either brilliant, crazy or no longer interested in the finer tactical points of the game.

** The chant of disgruntled fans suspecting the referee of bias will have to be “Did you bring him on the plane?”.

*** Especially when we win. Oh look, we did it again.

Climate change and the Watchmen hypothesis

The climax of Alan Moore’s famous graphic novel (warning: spoilers*) plays out around a moral dilemma. In a world of conflict and discord, maybe the only thing that can bring humanity together is a shared enemy. If you accept that proposition, then could it ever be morally defensible to create such an enemy? And if you discovered that the enemy was a sham then would it be better to reveal the truth or join the conspiracy? Part of the reason the conclusion to the book is so chilling is that your heart wants to side with the uncompromising truth-seekers while your head makes rational calculations that lead to unpalatable conclusions.


Selected panels from p.19 of WATCHMEN 12 (1987) by Alan Moore and Dave Gibbons, published by DC Comics.

Why am I invoking an old comic? In climate change we face a common enemy which is undeniably real, whose approach can be measured, predicted and increasingly experienced in real time, and which has been created entirely by ourselves. It may not have the same dramatic impact as a surprise alien invasion; think of it more like spotting the aliens while their ships are still some distance from reaching Earth, but they’re already sending trash-talk transmissions in which they detail exactly how they’re going to render the planet uninhabitable to humans and frying a few interstellar objects just to prove the point. And we invited them, knowing exactly what would happen.

In this case there is no conspiracy. The stream of scientific studies demonstrating the perils of allowing global temperatures to increase further is so continuous and consistent as to be almost background**. We want to stick our heads in the sand and ignore climate change because we enjoy our short-haul flights for city breaks, private cars to drive to work and occasional steaks. The individual incentives are all aligned towards apathy, ongoing consumption and deferred responsibility. Whatever is the worst that could happen, many of us in the Global North will be dead of natural causes by the time it reaches our parts of the world.

In the face of such an obvious existential threat, about which we have been forewarned by the consistent voice of the overwhelming majority of scientists, how is humanity preparing? Are we coming together as one? Have we overcome our differences and channelled our collective intellects and resources into finding a solution for all?

Like hell. America withdraws from the only international agreement with a shred of common purpose; Australia continues to mine coal while the country burns; Poland hosts a gathering of climate scientists and uses it to defend the coal industry. I know people who have stopped attending UNFCCC meetings because the emotional toll is so great. To recognise how much needs to be done and to witness how little has been achieved is a terrible burden. This is not to say that the situation is hopeless; with concerted action we can still avert the worst outcomes, and doing so remains worthwhile.

With all this in mind, I’m forced to conclude that the evidence in support of the Watchmen hypothesis is lacking. Creating a common enemy will not be enough to bring the world together. We’ve been trying it for 30 years already***.

Where does this leave the likes of Extinction Rebellion? Over the last year I’ve been amazed by the scale of the protests in London, Berlin and cities around the world, which exceed every previous effort. It feels like a tipping point, and it ought to be, because one is long overdue. Whether it proves to be the moment the tide turns, time will tell. It has all the elements of a success story: popular support for the message, if not always the methods; an inspiring figurehead in Greta Thunberg who continues to exceed expectations; politicians scrambling to be seen on their side. Yet the background to this is the ongoing prosecution of many of the participants as states quietly assert their control. And the usual pattern of politics is for green issues to slip down the agenda as soon as an election looms****.

One of the side-effects of XR is that the disaster narrative has currently obscured other discourses and even subjected them to friendly fire. But this is not a battle which will be won on a single front. Many alternatives are available, including market mechanisms, commercial partnerships or a rebalancing of economic goals (there are good reasons why the Green New Deal isn’t quite it, but I admire its objectives). These are not exclusive of one another, nor likely to be sufficient on their own, but if we are to succeed in inspiring change then a mixture of approaches and messages will be essential.

I’m not saying that we should stop heralding the impending cataclysm*****. The uncompromising truth-speakers are right. We need to keep up the drumbeat of evidence, narratives, reporting and campaigning as the climate crisis unfolds. There are positive, individual steps that we can all take. But if we hold out for the moment when humanity suddenly unites to act as one then I fear it may never come.

 


* It’s been out since 1987 so you really have had plenty of time to read it, but if you haven’t then perhaps you should. And no, watching the film doesn’t count. Trigger warning: contains scenes of sexual violence.

** Even in such times, this paper stands out as particularly terrifying.

*** The first IPCC report was published in 1990. The fundamental message hasn’t altered since, even if the weight of evidence and urgency of action have increased. At least half of global carbon emissions have occurred since this report.

**** A few years ago I attended a talk by a statistician from one of the major political research agencies in the UK. He showed polling data with a consistent pattern: voters place a high priority on green issues between elections, but these slip down the ranking as an election approaches. Politicians know this, which is one reason why action in democratic countries is so slow.

***** If we’re going to stick with the comic book metaphors, this makes climate scientists the Silver Surfer to climate change’s Galactus. Too geeky?

Moving jobs as a mid-career academic


Why would anyone leave a permanent academic position at a research-intensive university?* After all, for many (if not most) PhD students, post-doc researchers and temporary lecturers, this is the ultimate dream. Openings for permanent posts don’t arise very often and competition for them is fierce. Once you’re ensconced in your own office with your name on the door then to most observers outside the ivory tower you’re living the dream.

And yet academics do move. Although it happens relatively infrequently in the career of a given individual, at least once they become permanent members of faculty, at any time all departments have a turnover of staff departing and (usually) being replaced. When this moves above a trickle it indicates problems, but there remains a background rate, even if on the surface everything is going well.

Having completed such a move just over a year ago, the rest of this post explains my thinking in doing so. I won’t mention who my former employer was, not that it’s hard to find out. That’s simply because I don’t want this post to even carry the suggestion of hard feelings or criticism of any individual or institution. But first: a story.

Two years ago I completed a job application on a flight home from the US. The flight was delayed and the deadline was the same day, which meant that on arrival at my parents’ house I rushed through the door and submitted online with minutes to spare. Recovering from the jetlag or even showering had to wait. A few weeks later I received notification that I had been shortlisted, then not long afterwards found myself back at the airport flying over to Ireland for an interview.

This had only been the second job application I had made that academic year and the first response.** That I only made a handful of applications was in part through being selective but also because mid-career positions don’t come up very often. There are often places at the bottom of the ladder for junior (tenure-track) lecturers, though nowhere near enough to meet demand, but by the time you’ve been in the business for over a decade, your skills and experience are so specialised that you either need to be lucky enough to find an opening for someone exactly like you or a call so broad that you can engineer your CV to fit. I also wasn’t going to risk moving for anything other than a permanent position.

Given all this, I went to the interview with the intention of treating it as practice and continued applying elsewhere. It’s always worth having several lines in the water, even if you don’t end up needing them. I wasn’t desperate for a job because I was in the fortunate position of already having that security. Maybe this relaxed, open-minded approach helped, because I got an offer.

There’s a slightly embarrassing element to the next part. When the phone call first came through to offer me the position I hung up. At that precise moment there was a tearful post-grad in my office who had come to see me for help. I will always put supporting a student in distress ahead of any phone call, however important. Luckily UCC weren’t offended by my rudeness and called back later.

To end the story, here I am. There are lots of great reasons for being in Ireland right now, and specifically at UCC. These include a growing focus on my field, national investment in forestry and agroforestry, and a booming higher education sector. The reasons for leaving UK Higher Education would surprise no-one.***

Why though did I leave a permanent academic position at a global top-100 university with international recognition? Several junior colleagues were aghast at what looked like folly. I had invested 13 years in the institution, built up a research group, developed teaching materials that were tried-and-tested, and no-one was trying to get rid of me. On the contrary, at the same time as I was trying to leave, they gave me an award and a performance bonus. I loved my colleagues in ecology and evolution; they’re a wonderful group and remain friends. The opening to replace me attracted a host of well-qualified applicants and they had no difficulty recruiting someone brilliant.

Why then did I leave? More generally, why would anyone disrupt their stable work and family life to move mid-career? These are my reasons, which may not translate to everyone’s circumstances, but perhaps might help clarify my thinking for anyone in a similar situation.

  1. I had gone as far as possible in the context of my existing position. After 13 years without a sabbatical the lack of respite from accumulated responsibilities left no space to reflect or develop. The backlogged manuscripts weren’t getting written; new projects were grounded; every year the expectations rolled over and incrementally increased. The thought of spending another year (or more) doing the same thing in the same place filled me with existential dread. Had I felt as though an alternative was within reach then I would have stayed. There’s no complaint implied here; the job had just become one that didn’t fit me any more.
  2. This was a quiet period with several major projects recently completed. Although I had four PhD students on the books (all with co-supervisors), actually the group was at a relatively low ebb, and nothing new was on the horizon. This was partly deliberate; having made the decision to go, I didn’t want to leave too many people in the lurch.
  3. It was time for a new challenge. When I returned to the UK from Malaysia in 2002 I had no intention of staying for long. That it took 16 years for me to leave again was simply because the opportunities lined up that way. Life had become comfortable but also a bit boring.
  4. I wanted to shake up my perspective. After over a decade working in the same place you know your colleagues well and if collaborations haven’t sparked then there’s little chance that they will. Working with new people is the best way to expose yourself to new ideas. This either means moving yourself or hoping that fresh recruits will restore energy in the place you’re already based. It had been a very long time since the latter had happened (after 13 years I was still the youngest permanent member of staff in the building) so I left instead.
  5. We were starting a family, which prompted reflection on my approach to work-life balance. Long hours, working evenings and weekends throughout the semester, were not compatible with the life I wanted or the parent I hoped to be. Nor was I going to be taking extended trips overseas to visit field sites and collaborators. The fieldwork had been one of the compensations of my old job; if that was being scaled back then I wanted the possibility of stronger research interests at home.

I can’t say just yet whether the move has been successful, and at any rate there’s no way to know for sure without a controlled comparison of some partial metric. But what I can say is that I’m enthusiastic about science again, enjoy coming into work every morning, and optimistic about getting some projects I care about off the ground. On that basis alone it’s been worth it. In fact, the department will be recruiting more people very soon — if you want to join us then keep your eyes open for forthcoming positions!

 


* For ‘permanent’ you can read ‘tenured’ if you like, but the truth is that tenure doesn’t mean quite the same thing outside North America. Universities generally can’t fire us for no reason but the level of protection isn’t equivalent. For ‘research-intensive’ you can read R1 in the USA, or Russell Group in the UK, or whatever your local class of prestige universities is.

** I’m not telling you how many failed applications had gone in over the preceding few years, but there were plenty. These had however been rather speculative; what changed was that I put serious effort into developing much stronger applications.

*** Brexit, HE funding issues, Brexit, low pay, Brexit, workload, Brexit, managerialism… did I mention Brexit?