Why are there no African ecologists in Wytham Woods?


Are Wytham Woods really that special? Photo from https://www.flickr.com/photos/oxox/509238022

Calls for decolonising science have been growing across many academic fields, both as a means of addressing the legacy of imperialism and of broadening the scope and inclusivity of the collective human endeavour. Yet these discussions have been slow to take root in ecology. Many ecologists complacently assume that they are above this: we work with international collaborators in countries around the world, often living and working alongside local scientists with whom we interact as equals (or at least that’s what we tell ourselves). Can we shrug off this latest movement as the well-meaning but unnecessary moaning of pious social scientists?

No. And this ought to be glaringly obvious.

Those of us who work in the tropics are all familiar with the classic study sites: Barro Colorado Island in Panama, La Selva in Costa Rica, Danum Valley in Malaysia… the list could go on. What unites all these sites is that they were usually established, and are often still run, by white scientists from First World countries (mainly Europe, North America and Australia). They are visited by streams of researchers and students from these First World countries. While the field centres are staffed by locals and include local scientists in their research programs, the funding to support them comes primarily from overseas.

Many ecology programs in First World universities include a glamorous field trip to an exotic location where our students can learn about applied conservation. They travel to Africa, South America or Southeast Asia and spend a few weeks visiting field sites which have been made famous through the published work of mainly white First World scientists, sometimes their own instructors. These field courses often employ local teachers and guides, and may even be based at local institutions, but the locals are there to support the visiting experts, not to be celebrated for their own research programs.

We have our classic field sites in the First World too. But how many visitors from the developing world come to see them? Why are there no Africans in Wytham Woods, studying the dynamics of a temperate woodland? Where are the Brazilians in Hubbard Brook? How many Indonesian scientists make a pilgrimage to see the Daintree Forest in Australia?*

There is one obvious reason why developing world scientists don’t visit these sites for research: money. Yet this is not an insuperable barrier. If we genuinely cared about developing international science, and believed that our cherished major study sites were of international importance, then we could find a way. In the same way as a British forester could develop sufficient expertise to interpret a study site in Africa, surely a Ugandan forester would be able to shed some light on what’s happening in a forest in Oxfordshire. Has anyone thought to ask?

More important, though, is that they probably don’t care. Perhaps our favoured locations are much less interesting than we would like to believe, and have little to say to scientists in other countries. The one-way flow of assumed expertise and insight is a glaring failure in the way our entire field operates. In short, we need to decolonise ecology.

In a new paper in Biotropica we draw attention to this problem and suggest three responsibilities that researchers from the developed world need to accept as part of a moral imperative to decolonise our field.**

The first is a recognition of our own subjectivity; ecologists from the Global North bring a set of priorities, paradigms and assumptions that are not always shared by the people living in the countries in which we work. The solution is not to indoctrinate the locals in our way of thinking, but to learn what their own perspectives are, and to incorporate them fully into our research programs.

Secondly, we can stop calling our field sites ‘remote’ just because they require a long plane flight to reach and are found in places without reliable running water. To many people they are simply ‘home’. We should recognise and respect their expertise, even though it is exhibited in different ways from our own. If we genuinely wish to support local people then we should seek to arrive as supporting collaborators in achieving their goals, not solely ours. That would be truly impactful research.

Finally, we should start reflecting on our own background and how it inflects our conduct as researchers. Positionality statements are a common starting point in the humanities literature but remain very rare in science. This isn’t just a tick-box exercise for which we need to find an appropriately contrite form of words before carrying on as before. We need to acknowledge that the neutral scientific voice is a myth, one which disguises our own agency while writing out the contributions of others, particularly the locals we rely upon. We need to reflect on and state our potential biases in the same way as we would expect a declaration of conflict of interest or funding sources.

None of these prescriptions is inherently difficult; it’s just that the structures of modern science do not currently provide incentives for achieving them. But we created the structures of science. It’s our responsibility to change them.


This is an updated version of an article I wrote for a newsletter a few years ago. My thinking has been greatly refined through discussions with Kate Baker and Mark Griffiths, my coauthors on our paper in Biotropica.

* To anyone reading and planning a comment saying “I took some African ecologists to Wytham, look, here’s a photo”, please stop and think about whether that either invalidates or reinforces my argument.

** In case you’re wondering how three white Europeans feel that they have a right to weigh in on this, then another blog post on this will follow, but briefly: in cases of inequality, it’s the responsibility of those in a position of privilege to take action, not to wait for someone with less of a platform to tell them that they need to.


Of otters and scurvygrass


A dense patch of scurvygrass (Cochlearia officinalis agg.) in an area much frequented by otters based on the clear trails, spraint and crushed mollusc shells.

On a sunny February weekend I made a trip out to Bere Island (Oiléan Béarra), a short ferry-ride off the southwestern tip of Ireland, to investigate a peculiar botanical phenomenon. An old friend had called on me recently and enquired what the connection was between otters and scurvygrass (Cochlearia officinalis L.*). I confessed to knowing nothing about it and was slightly sceptical, but he was adamant that there was such an association, and wanted to show me in person. Any excuse for a field trip is welcomed so off I went.

Once on the ground there’s no doubt about it. The otter haul-outs are very obvious, and invariably marked with scurvygrass. The plant does grow elsewhere, unsurprisingly, but never in such densities or with such luxuriant foliage as where it occurs alongside otter signs. Whatever the otters are doing, the scurvygrass is enjoying it. A representative sample of rocky outcrops suggests that if you spot a patch of scurvygrass, it’s a clear indication that once you get up close there will be matching evidence of otters. Neighbouring outcrops lacking the plant are also devoid of otter signs.

It’s not only at the shoreline, where otters leave the sea and where their spraint or the remnants of smashed shells indicate favoured spots to hang out. Their tracks continue inland. Often after coming onshore the track leads directly to a freshwater pool (suggesting that they like to wash the salt water off their fur), then continues out the other side. They then wend their way through the grass and heather, apparently choosing to cross the island on foot rather than swim their way round.

This is where it gets even more interesting: in the midst of these fields, scurvygrass is only found on the otter trails. Elsewhere the sward is higher and there is no sign of the plant. Otters are clearly carrying scurvygrass inland and encouraging it to grow in places where it otherwise would not. This effect is only seen in fields which don’t contain livestock; cows and sheep have a tendency to share the same trails with otters and perhaps browse or beat down the scurvygrass. But on the old military firing range, where otters have the land to themselves, every one of their trails is peppered with patches of the plant.


An inland otter track through a field with a dense grass sward. Scurvygrass can only be found along this trail.

On returning to the office I consulted the usual books to see whether I could find any record of this particular association, and drew a blank. Web searches, whether in the scientific literature or the internet at large, have also come up with nothing. So what is going on? I have a few hypotheses:

  1. It’s a coincidence. Any ecologist has to keep the null hypothesis in the back of their minds. Maybe this is pure chance, or else some unknown independent environmental factor dictates the combined presence of otters and scurvygrass. I haven’t done a randomised sampling design to demonstrate a statistical association, but often the evidence of your eyes is enough to tell you that there’s no need.
  2. Otter disturbance. Scurvygrass thrives in patches with frequent disturbance, and the constant to-and-fro of otters might open up denser vegetation in such a way that they allow it to enter. Perhaps scurvygrass is more tolerant of this kind of disturbance than other plants.
  3. Dispersal by otters. The seeds of scurvygrass don’t look as though they are adapted for sticking to the sides of animals, which would be one mechanism. Do otters eat scurvygrass, and thereby carry the seeds with them, defecating them in freshly-disturbed areas that aid its germination? This is currently my favoured explanation, but I don’t know enough about the diet of otters to be sure.
  4. Saline environments. Perhaps all the salt water clinging to the fur of otters changes the soil at their haul-outs and along their trails, favouring the growth of a halophyte such as scurvygrass. This is possible, but not entirely plausible given that patches occur quite a way inland, even after the otters have taken a freshwater bath. Soil samples might resolve this. There are also no other halophytes which show the same pattern.
  5. Otters go where the scurvygrass is. Maybe they like the feel of it on their paws? It’s unlikely to be that they’re feeding on it, otherwise you would expect to see less scurvygrass in the places they use most frequently, while the opposite appears to be true.
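Hypothesis 1 is at least testable. As a rough sketch of what a formal check might look like (all counts below are invented for illustration, not real survey data), a one-sided Fisher’s exact test on the presence or absence of otters and scurvygrass across a set of outcrops would tell us whether the two co-occur more often than chance alone predicts:

```python
from math import comb

def fisher_exact_one_sided(a, b, c, d):
    """One-sided (greater) Fisher's exact test for the 2x2 table
    [[a, b], [c, d]], computed from the hypergeometric distribution."""
    row1, col1, n = a + b, a + c, a + b + c + d
    denom = comb(n, row1)
    # Sum P(X = x) over all tables at least as extreme as the observed one
    return sum(comb(col1, x) * comb(n - col1, row1 - x)
               for x in range(a, min(row1, col1) + 1)) / denom

# Hypothetical survey of 20 rocky outcrops:
# 9 with otters and scurvygrass, 1 with scurvygrass only,
# 2 with otters only, 8 with neither.
p = fisher_exact_one_sided(9, 1, 2, 8)
print(round(p, 4))  # ~0.0027: co-occurrence unlikely to be chance alone
```

With real data the counts would come from a randomised sample of outcrops, and `scipy.stats.fisher_exact` would do the same job in one call; the point is only that the null hypothesis could be dispatched with a morning’s fieldwork and a 2×2 table.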

Have I missed anything? Is this a known phenomenon that my friend has independently discovered? I’d be interested to hear from anyone who knows more about this in the comments. For now it’s only a small mystery, but also an intriguing natural history observation, and a reminder that people who walk outdoors and watch the world around them carefully often spot patterns that professional ecologists in their offices would never find.


Credit to Bere Island resident Maurice Neligan for spotting this interesting pattern.


* This is as close as I can get to an exact species identification. Stace notes (in the 3rd edition; I don’t have the latest one just yet) that C. officinalis is highly variable, an aggregate of several likely species, and also freely hybridises with C. anglica and C. danica, especially in Ireland.

UPDATES!

As ever, Twitter provides a wealth of insights from other botanists. Here is the pick of the suggestions:

The first can be summarised as ‘disturbance + fertiliser’, the second with the twist of extra salinity, which might not be important given observations elsewhere. But scurvygrass isn’t only found where spraint occurs. It’s also along trails, some of which are quite steep, and any spraint is likely to roll or blow away pretty quickly. Otters may be marking trails with urine which could have a similar effect. Scurvygrass is also found right at the point at which otters emerge from the sea. I can’t help thinking that there must be a dispersal element to the story as well. This seems to be supported by another sighting of scurvygrass associated with birds:

There’s only one thing for it — we need to do some experiments! This may not be the most important project on my list but it makes for an enjoyable distraction.

Why should anyone care about Ugandan lianas?


The liana team surveying in 2015 (Takuji Usui, Julian Baur and first author Telma Laurentino). Bridget Ogolowa (far left) did not participate in the study. Photo by Line Holm Andersen.

Habent sua fata libelli, as the Latin adage puts it: ‘little books also have their destinies’. I’d like to think that the same is true of papers. Not every scientific publication appears in a major journal, or attracts media attention, or becomes a highly-cited classic. Some, perhaps, are never read again by anyone. This doesn’t mean that publishing them wasn’t valuable. A paper represents a new piece of knowledge or insight that adds to our total understanding of the world. And in some cases its small part in the greater whole is the main reason why it matters.

As an example, our latest paper just came out in African Journal of Ecology, a minor regional journal with an impact factor so small (0.797 in 2017) that in the metric-obsessed world of Higher Education it barely registers. Some would argue that the effort of publishing in such a low-status journal is a waste of time*. Why bother?

In this case, our study — small and limited in scope as it was — adds an important point on the map. Over recent years it has been noted that the abundance of lianas is increasing in South American forests. This process, sometimes known as ‘lianification’, is troubling because lianas can impede the growth of forest trees, or the recovery of forests following disturbance (including logging). At a time when we need forests to capture carbon from the atmosphere, an increase in the abundance of lianas could be exactly what we don’t want.

The causes of this increase in lianas are unknown, and it is also uncertain how widespread the effect might be. The best evidence that it’s happening comes from neotropical forests**, but we can’t be sure whether the same process is occurring in Southeast Asia, or Sri Lanka, or Africa. If the driver is a global one, for example a change in the climate (warming, higher carbon dioxide concentrations, or longer dry seasons), then we would expect the same trend to be occurring everywhere. If it’s a purely local effect within South America then it might reflect historical factors, modern disturbance or the particular composition of plant communities.

It’s not just that we don’t know whether lianas are increasing in all parts of the world simultaneously; for most forests we don’t even know how many lianas were there in the first place. We could only find evidence of four published studies of liana abundance in the entirety of Africa, of which two were in secondary or transitional forests. That means only two previous studies on the continent had measured lianas in a primary forest. If we want to monitor change then we first need a starting point.


Location of our study in Kanyawara, Kibale National Park, Uganda. Figure 1 in Laurentino et al. (2018).

What did we find? Actually it turns out that liana densities in our forest were quite similar to those seen elsewhere in the world. An average liana basal area of 1.21 m²/ha is well within the range observed in other forests, as are the colonisation rates, with 24% of saplings and 57% of trees having at least one liana growing on them. These figures are unexceptional.

What does this tell us about lianification? To be completely honest, nothing. Or at least not yet. A single survey can’t say anything about whether the abundance of lianas in Africa is increasing, decreasing, or not changing at all. The point is that we now have baseline data from a part of the world where no-one had looked before. On their own these data aren’t particularly interesting. But considering the global context, and the potential for future studies to compare their work with ours, means that we have placed one more small piece in the jigsaw. And for the most part, that’s what science is about.

 

CODA: There’s another story behind this paper, because it came about through the awesome work of the Tropical Biology Association, an educational charity whose aims are capacity-building for ecologists in Africa and exposing ecologists from species-poor northern countries to the diversity and particular challenges of the tropics. Basically they’re fantastic, and I can’t recommend their courses highly enough. The work published here is based on a group project from the 2015 field course in Uganda and represents the first paper by three brilliant post-graduate students, Telma Laurentino, Julian Baur and Takuji Usui, who did all the real work***. That alone justifies publishing it, and I hope it’s only the first output of their scientific careers.


 

* A colleague at a former employer once memorably stated in a staff meeting that any journal with an IF of less than 8 was ‘detritus’. This excluded all but a handful of the most prestigious journals in ecology but was conveniently mid-ranking in his own field.

** Although this might be confounded by other factors — look out for a paper on this hopefully some time in 2019.

*** I also blogged about the liana study at the time here.

Reviewing my reviewing hours


Review wherever and whenever you can. Astronaut Shannon Lucid reading on the Mir Space Station in 1995. I have no evidence that she was reviewing, but it’s possible. Source: NASA.

The life of an academic involves an awful lot of reviewing, such that the constant stream of critical judgements can exact a mental toll. Quite apart from how we feel about this part of our work, another issue is how do we fit it all in? We are frequently expected to review manuscripts, post-graduate theses, grant proposals, job and promotion applications, and often to tight deadlines. This is in addition to the normal assessment cycle of undergraduate degree students that forms the core of our business.

Having recently started a new job and a family, I’ve been trying to set a few ground rules for how I would like to restructure my routine. Part of this involves developing a better work-life balance. This is necessary given that for the last few years the work-work balance has been hard enough. I am aiming to be on campus for no more than 40 hours a week, Monday to Friday only. That this is framed as an ambitious target is itself an indicator of how unhealthy my relationship to work has been in the past.

For the first month it’s gone well, but that’s due in part to the light workload I’ve been granted during my first semester. Gradually, however, I’ve noticed a pile building up in the corner of the office: all my reviewing. This week I had to take a day off sick and the first thought that crossed my mind was “Oh good, I can catch up on the reviews”. Is my mindset unhealthy as well?

Over the last 15 years I’ve got used to the idea that reviewing is something that happens outside work, preferably at home in a comfortable chair in front of the stereo. This means it happens almost exclusively at evenings or weekends. I have generally found it to be a relaxing activity for this reason, and that trying to review while in work only makes me agitated about all the other jobs I ought to be doing with that time. When I draw up my list of jobs for the day, the reviewing gets deferred until later.

Am I unusual in my working practices? There’s only one way to find out: ask Twitter!

I can understand why reviewing during normal working hours should be seen as the default. It is part of our role and the expectations of an academic life, and therefore counts as our day-to-day activity. If it’s work, do it at work. But based on an admittedly small sample size, a majority of fellow academics don’t agree, or at least don’t follow this argument through in practice.

Why might we not review during office hours? For the most part we volunteer to take on the reviews, and our employers rarely if ever pay us for it. From their perspective it’s something to tolerate rather than encourage given that we could be writing another paper, working on a grant proposal, teaching some students, or dealing with the endless flow of administrative chores. Every time we say yes to reviewing a paper it costs them some of our time and therefore their money. Yes, this is a cynical view of higher education, but I worked in UKHE for a long time so you should expect that.

When we are paid to review (mainly for some grant-awarding bodies, or as external examiners) it’s not our employers who foot the bill. This is additional income and therefore feels as though we should be doing it in our own time. Once again we have chosen to take on this additional work, and while we probably don’t do it for the money (which barely compensates), the presence of a cash incentive does change our perception*.

I put down the final option of travelling because I know a number of people who deliberately use what would otherwise be ‘dead time’ to catch up on reviewing tasks. If you have a long commute on public transport, or spend lots of time in airports or hotels, this is the kind of work that can be done anywhere. I suppose then the question is what else you might do in that time, such as reading for pure pleasure or catching up on sleep. Is reviewing while travelling a way to continue working during normal hours, or an extension into what ought to be a time to relax? If we don’t take work home then does that imply that any time outside the home can be used for work?

How am I going to resolve this? Well, today I’m working at home, and getting some reviews out of the way in the process. I’d be interested to hear how others fit their reviewing commitments into a normal working routine.


 

* The Swiss national funding agency sends its reviewers boxes of chocolates by way of thanks, which is a nice touch, and has no bearing on the quality of my reviews because I don’t much care for chocolate.

Diary of an academic mid-life crisis


I won’t be needing these any more.

In a few weeks I’ll be moving to a new position at University College Cork in Ireland, in the wonderfully named School of BEES. This entails moving home, family and job to another country. Combined with the recent birth of our son, it has been a summer of upheaval, reflection and transformation. It’s a chance to reconsider who I am and what I aspire to do with the remainder of my career. In short, I’m having an academic mid-life crisis.

After 13 years at Nottingham, occupying the same office the entire time, I’ve accumulated a lot of detritus. The last few weeks have been a long process of throwing away many things that I don’t need (reprints, project reports, old posters), giving away others (books, consumables) and whittling down the remainder until I retain only what I think I’ll need. After carefully preparing a list of my career objectives, the criterion for whether I pack something into a box is whether it will directly help me to achieve those goals.

This seems like an obvious strategy, almost trite. But the urge to retain only what I need comes into conflict with the things I might keep ‘just in case’ or because they have some sentimental value. For example, should I keep hard copies of the PhD theses of former students? I have them in electronic form, and the only reason they’re on the shelf is for display. I literally don’t need them.* But looking at them fills me with fond memories, so into the box they go.

The other and most difficult set of decisions surrounds datasheets and specimens. Some of these have been in my possession for 20 years or more. Simply being aware of them raises pangs of guilt that they have never given rise to the publications that would reveal them to the world, not that the world would show a great deal of interest either way.** My rule has been that if I haven’t touched them, have no intention of ever using them, and can foresee no future role for them, then they go in the skip.

The idea of disposing of data or specimens provokes a visceral feeling of dismay among scientists. All that work, the long hours in the field, the grant money spent in obtaining them, and it’s come to nothing. No wonder that people have tried to stop me:

Scanning is a nice idea but there are simply too many. I would be spending days that I don’t have standing in front of the scanner, collecting files that would then sit instead on a hard drive and be ignored for another decade or two. Time is my most precious resource and using up more of it on lost causes is the Concorde fallacy.

Even in the most optimistic scenario, if I kept the files, what might happen? I would spend still more time entering and analysing the data, then writing and submitting a manuscript, which even if published would simply add to the growing mountain of mediocre and unimportant science that lies unread, uncited and uncared for. It’s not only my time but that of several editors and reviewers. With the best will in the world, these data probably don’t deserve to be published. There are bigger and better things I want to do.

There is an even more powerful case for throwing these old projects away, and it comes down to my mental health. A new job is an opportunity to make a fresh start, to redefine myself. It’s a chance to shed some baggage that I acquired during my PhD, added to over the course of a few post-docs, then shoved in the corner of my office and ignored.

In the same way as there are positive reasons for retaining possessions, such as the warm glow an old PhD thesis gives, there are also negative ones. Data can go bad, and after a while it begins to influence your mental health. I don’t need the guilt to follow me any longer. It doesn’t need to cross the Irish Sea with me, and reminders of previous failures don’t need to take up space in my office or on my hard drive. Some projects are also bundled up with more personal recollections of the people or events they were associated with, and which make them even harder to return to. I’m not the person I was then; I’m not the scientist I was then either.

So away it goes. I can tell myself that I no longer care about the canopy heights which orang-utans occupy as they go through rehabilitation; the frequency of leaf damage types in different rain forest environments; the distribution of sub-canopy scrub pine in Russian forests; or succession in herbivore exclosure plots on the island of Lundy. It was all interesting to me once and, to be honest, it still is. It was all worth doing at the time and I don’t regret it. But enough is enough.

I’ve got some big ideas and several exciting projects that I can’t wait to start. I’ve made space for them now and I’m looking to the future. Wish me luck.


For some reason I still had four pre-viva copies of my doctoral thesis. Four! One for me, one each for the examiners, and one spare… All now in the bin.


* The only time they come down is to show a current student the overall structure of a thesis. That’s a very limited task and one that could be accomplished by showing them almost any thesis.

** There is the remote possibility that someone might read the appendices of one of my minor papers and demand to see the physical evidence. This is a moral reason for retaining specimens but not, to my mind, a strong one. It happens so seldom in the career of any scientist (and never yet to me) that I doubt it will ever occur. And one day I will inevitably die, retire or leave science, at which point they will be lost regardless. Pretending that anyone will mourn the specific loss of my collections is just vanity.

Trumpets are meant for blowing


The Second Line Social Aid and Pleasure Society Brass Band perform at the Boston women’s march, as captured in an excellent article by Amelie Mason which shows exactly how a trumpet can be more than just a musical instrument.

A man walks up to a brass band, and asks one of the musicians whether he can buy her trumpet. Confused by the request, the musician replies that she wasn’t planning to sell the instrument, but could be persuaded for the right price. She asks why the man is so keen on buying her trumpet. Is he perhaps a musician himself? “Oh no,” the man responds. “I only want it for the brass.”

I’d like to use this analogy to think about the value of a university education. The story is adapted from Bertolt Brecht’s Messingkauf dialogues, a series of observations and parables on theatrical theory that he began in 1938 but never finished*. Brecht was making a point about the differing criteria of value that might be held by an artist and their audience.

Right now is a good time to have this conversation, just as undergraduate students are about to find out their exam results. Soon our graduates will be launched into the job market and have to sell their capabilities to potential employers. To employ a metaphor that Brecht didn’t intend, they will have to blow their own trumpets. This does however depend upon them still having trumpets and knowing how to use them.

Throughout his career, Brecht was obsessed with the idea of how theatre could be used as a means of instruction. Sometimes this was an explicit aim, for example in his Lehrstücke, or learning plays. At other times it was intended to be subliminal, distracting the audience while ensuring that their subconscious absorbed the intended message**.

The challenge was that audiences don’t go to the theatre to learn something. They are there to be entertained, to relax, to see what all the fuss in the newspapers is about, to associate themselves with a political faction, or as a signifier of their intellectual credentials. Over dinner or in the workplace they could then tell friends and colleagues “Oh yes, I went to see that Brecht play the other night,” and offer some personal observations.

Surely, you might think, the problem for an academic isn’t the same as for a playwright or our trumpeter. The audience have come to university to learn. We perform in some way, whether that’s through lecturing, tutorials or other pedagogical forms. While we try to make our lectures engaging and entertaining, the performative aspects are very much secondary. The message is the important element; what we want to say is what the students want to hear.

Except that it isn’t. In a university, trading is always taking place. Students are there because, by and large, they want to learn the material and pass their exams. This is not always for the intrinsic value of knowledge, although having some passion for the discipline certainly helps. Rather, they need evidence that they have moved some material. They absorb, recite, then obtain a reward for having done so. For a brief period they have been the bearers of information which can be returned and assessed.

This is of course a cynical viewpoint and not meant as an insult to the many committed, dedicated students who care deeply about the subjects they study. But the commodification of higher education encourages them to think as customers. Teaching is simply part of the compact: we deliver information, they demonstrate that it was received, we get paid.

And how much brass can you get for a degree? Helpfully, the Institute for Fiscal Studies have produced a report where you can find out exactly how much previous graduates have benefitted from sitting a particular subject at a given university. This is being circulated as a tool to help students make an informed decision on how best to spend the loans they receive in order to pay for their tuition. It gets worse though; the UK government is determined that this be used as a measure of value-for-money, and even as a stand-in for teaching quality. These are evaluations based on brass, not music.

We understand the sinking feeling of the trumpeter every time a student asks us what they need to know to pass the exam, how to get a first in our module, or whether the assigned reading is compulsory. We feel it when our students select modules based on the previous cohort’s grades, whether the lecturer is perceived as a ‘hard’ marker, or if the assessment is of their preferred type (exams or coursework). We see it when the conversation about supporting a student begins not with “I want to understand this subject more deeply” but “I need to get a 2i”***. I don’t blame them for taking this approach; they have been led to believe that this is the purpose of a university education.

When academics teach material, we do it not for the necessity of saying something (although lecturers, like musicians, still need to get paid). We want our audience to feel something, to respond to the narratives we weave, and to act accordingly. When we fail to move them to value the story behind the information, something has gone wrong: with our own abilities as teachers, with a system that encourages purely functional attitudes towards learning, with the willingness of the audience to see beyond the original reason they might have turned up.

A university education is more than just a certificate that can be leveraged to obtain a better salaried job. If that’s all a graduate does with their degree then they are in the same place as our fictional trumpet buyer. Perhaps that’s all they wanted all along, which is itself a shame. But that’s not what got me into doing this job. I’m here for the music.

 


* I have of course modified it for didactic reasons, but that’s surely just being a good Brecht disciple. The original is Dialoge aus dem Messingkauf, and Messingkauf can be translated as ‘buying brass’.

** To a modern audience these efforts can seem forced or inappropriate, but at a time when the arts were being deployed by fascists for political indoctrination it was essential that the left fought back with its own tools. In universities we’re not playing for the same stakes.

*** For non-UK readers, a 2i (or ‘two-one’) is an upper second-class degree. In most universities it represents an average mark of around 60%, and shows that the student has learnt enough to have a basic understanding of the subject. A number of graduate employers stipulate this as a minimum requirement. It’s roughly equivalent to a 3.0 GPA in the North American system.

It’s not easy being a tree

adult_bridge_dress_forest_girl_grass_heels_model-928773

I’m a beautiful tree! AAAGH GET THOSE CATERPILLARS OFF ME. CC0 Public Domain

Imagine you’re a tree. I’ve not been to a mindfulness class, but I’m aware that this is one of the standard exercises, or at least common enough to have become a stereotype*. I’d like to challenge the fundamental premise though because, when you think about it more closely, being a tree is not particularly relaxing.

Consider the life of an average tree. At any given moment its leafy tissues are being assailed by herbivores, while its woody parts are forever at risk of attack from a range of fungal pathogens. Below ground it doesn’t get any easier — parasitic nematodes swarm its roots. The life of a tree is one of being constantly eaten alive.

Meanwhile the tree is engaged in a complex trading relationship with a range of mycorrhizal fungi with their own separate interests. Through these it exchanges hard-won carbon for nutrients, which it decides how to invest to meet its short- and long-term goals. The immediate aim is to survive, making defences crucial, but it can’t neglect growth, otherwise its competitors will swiftly crowd it out. And it has to have some left over at the end to produce flowers and fruits. Reproduction is costly; pollinators won’t visit without some nectar to draw them in, and seed dispersers expect a reward for carrying fruits around. You always have to pay the couriers.

Are you sure you want to be a tree now? It’s not all about swaying in the breeze, feeling the warm sunlight on your leaves, and focussing on your inner strength. That sunlight needs to be converted into cold, hard carbohydrate-based currency, and there’s a lot of business to be done before winter (or dry season) cuts you short. You need to make enough to live off your savings for a large part of the year. And even trees don’t live forever — you’re only one storm, wildfire or beaver away from being struck out of the game.

Just thinking about it is making me stressed. So let’s take another viewpoint — what would a tree make of our human lives?

They might be quite jealous. We spend large amounts of time sitting down in front of glowing rectangles, which provide us with a surplus of resources to spend on leisure activities and relaxation. We barely have any parasites at all. If we want to mate, we get to choose our own partners and move directly to them. There’s no need to barter with insects, or release our hopes into the breeze. We actually get to meet our offspring, and know that they succeeded. Best of all, if we need food, we just steal it from a plant that went through all the actual effort of making it.

In short, being human doesn’t sound that bad after all. I feel much better about it now. Who’d be a tree?

 


* In poking fun at the tree exercise I’m not seeking to trivialise the value of mindfulness. Workplace stress is recognised as a health concern by the WHO, and everyone should educate themselves on how to support their own mental health as well as that of colleagues and employees. Nevertheless, reducing the risks of mental illness depends on identifying and dealing with the root causes; stress management exercises can help but they’re not enough on their own.