I like big risks and I cannot lie
First in a series of posts inspired by "Rationalizing Risk Aversion in Science"
I was, my entire academic career, almost uncontrollably attracted to risky projects. At the end of my PhD, when I was choosing a direction to go as a postdoc, I made the calculated move to leave the field of yeast gene expression and go into plant biology. It was a smart decision, though my reasoning at the time was based on a (very, extremely, I can’t emphasize enough how wrong) impression that there weren’t many plant biochemists. Changing fields was invigorating and there was a great deal of opportunity for me in plant science.
I also chose to apply only to a set of very prestigious plant biology labs for my postdoctoral training—labs that, as my thesis advisor said, “people would have heard of.” This was also a smart decision, given my career goals, and set me up for a lifetime of connection to one of the most influential plant biologists of our age.
However, the way I went about selecting a specific research area within plant biology—that was neither calculated nor smart. As I was interviewing for postdoc positions, I read this paper in preparation for meeting a yeast biologist who was also working a bit with plants. I can still remember reading that paper in my graduate school library, following it up with additional papers on gravity perception in plants, and just marveling that we didn’t know something so fundamental, so basic about plant biology. How could we not know how plants discern up from down??? And, though I didn’t join the Fink lab, I decided that I wanted to identify the plant gravity receptor.
At the time, I didn’t know that excellent scientists across the globe were already trying to understand the molecular genetics of plant gravity response. I also didn’t know that it was proving to be a challenging, if not intractable, problem. [Gravity perception in plants is still not fully understood, though folks are getting closer and closer to a full molecular understanding]. I also didn’t worry that I was doing this work in a lab that wasn’t working on gravitropism but on flower development and plant stem cells, so I was on my own in a lot of ways. The whole project was the very definition of risky.
As you have probably intuited, I was not successful. I tried many, many genetic and expression-based approaches to finding the gravity receptor and they all failed, some immediately and some after years of effort. About 5 years into it, I finally admitted that I needed to stop, and pivoted to an adjacent topic—the molecular identification and characterization of mechanosensitive ion channels (which were often proposed to be gravity receptors). This pivot was almost immediately successful, at least successful enough for me to get a faculty position. And that, dear reader, was the start of a fruitful (and delightful) research topic that I pursued for the rest of my career.
High Risk, High Reward
Picking a technically challenging question, in a field in which I was a novice, while in a lab that did not specialize in that field, is a perfect example of “high risk-high reward” research. I risked—and in fact came very close to—burning out on the lack of results. But I stuck it out, made a few savvy (also lucky) choices, and was rewarded with a career-defining, independent project that I was able to take with me as I started up my own research lab.
Scientists know that risky projects are often (though not always) the ones that will move a field forward. And I do think that the work we did on MSL and Piezo channels was important for the field of mechanobiology. But I would never counsel someone else to do what I did, and in fact have often wondered what my career would have looked like if I’d been more strategic about my research directions. Because during the first FIVE years of my postdoc, and in the first FOUR years of my faculty position, I had nothing tangible to show for all my hard work and long hours except stacks and stacks of lab notebooks stuffed with data and anguished “It didn’t work again” entries. There’s no section on a CV where you can list the height of your stack of lab notebooks. And, as I’ve written before, this lack of demonstrable productivity on my part was almost a career killer.
The problem of invisible effort in risky science
What kept my career alive? I’m going to get there, I promise! But first, let me tell you about a recent paper in PLOS Biology entitled, “Rationalizing risk aversion in science: Why incentives to work hard clash with incentives to take risks”.
Much of this paper was beyond my understanding, at least at first read, so forgive any mistakes (and let me know in the comments if you can clarify!). The authors use mathematical modeling to understand the relationship between risk and effort in scientific research. They conclude that the unseen nature of much of scientific work—the way that you might work on a risky project for years and have nothing to show for it if the tool you are building doesn’t work, or if your hypothesis is wrong, or if experiments prove too difficult to execute—ends up skewing our reward system towards less risky projects.
“Scientists respond [to the non-observability of actions] by working on safe projects that generate evidence of effort but that don’t move science forward as rapidly as riskier projects would.” Gross and Bergstrom, 2024.
Because most scientists know that taking on a risky project means that a great deal of work might end up totally invisible to external evaluators, we understand that a risky project = a risky career. And we get it—nobody likes slackers, and how can you tell if someone is a slacker or just picked the wrong project? From the outside, they look the same. I remember vividly, after realizing I had yet another failed project on my hands, wailing to my postdoc advisor, “But I’m working so hard!” I felt such relief when Elliot said, “I know you are!” He must have measured my stack of lab notebooks!
There’s risky and then there’s RISKY
The authors had to make a number of simplifications and assumptions just to produce the equations, which is totally normal, but which also opens up all kinds of questions. For example, the authors admit that one of the simplifications they were forced to make was that all scientists are the same:
“Our analysis makes a number of simplifications . . . Perhaps most substantively, we have assumed that all investigators are alike. In science, researchers differ in many ways that affect how they design their research programs, including their abilities and their predisposition to take scientific risks.” Gross and Bergstrom, 2024.
Different scientists are motivated to different degrees by curiosity/a desire to solve problems/an affinity for the hands-on work. What problem they select is a mixture of personal motivation and a (conscious or unconscious) calculation about what is doable, what is “hot”, and what will be publishable. These differences are a big part of what makes science so effective and so fun to be a part of!
But there is another individual aspect to this which is not fun, and that is the way that a scientist’s gender, race, ethnicity, and association with prestige influence whether others are willing to give them the benefit of the doubt—or whether others are unwilling to “see” any work that has not resulted in a publication.
I paid for my risky postdoc interests with failed projects, low publication rates, and a three-year search for a faculty position. BUT—and this is a big but (sorry)—I was, in the end, given the benefit of the doubt. I had famous advisors and respected institutions on my CV, I spoke the language of academia, I was white. I am pretty sure that things would have played out differently if I’d lacked these particular characteristics, characteristics that helped hiring committees empathize with me and see potential that was not at all reflected in my CV.
How to relieve the burden of risky research?
The authors of the PLOS Biology paper mentioned above conclude that the self-organizing nature of science prevents any change to this essentially conservative system, and that the essential tension between risk and invisible effort at the individual level will keep many scientists choosing safe projects. I want to think and talk more about whether this truly is an insurmountable problem. I also want to talk more broadly about the invisible parts of faculty work, and how I think we lean on metrics as a result. Next post: can we make what’s invisible visible through different publishing approaches?
Side note
One irony of this meditation on invisible work in academic science is that I have myself been pretty unseen here (Substack) lately. There are a number of reasons for my disappearance: laziness, burn-out, getting my writing/editing/coaching business off the ground. I have been worrying a fair bit about not meeting readers’ and subscribers’ expectations, and I’m sorry if I’ve let you down. I plan to be more visible going forward!
Discussion Section
If you are an academic, biologist or other—are any of your activities invisible?
How can we shift the effects of risky science off of the individual? Large, multi-group projects are one way, but what about single investigators, and especially trainees?
The truly terrifying thing about science is you NEVER know when your research will discover something NEW. I just watched my son labor hard for 5 tough years in his biomedical science PhD program. It wasn't until year 4.5 that he had the breakthrough that won him a Nature first author publication and his degree. The only sure way to publish a lot is to just nibble around the edges of the known (yawn). Science is a very tough career, even without all the academic politics you write about so well.
As to worrying about not publishing often enough here on Substack, unless you have paid subscribers, I think it's best to write purely when you're inspired to do so. Go with the flow!
Baird
This post reminds me of Sinclair Lewis's "Arrowsmith," one of the only American novels that I know of to dramatize the scientific method. It's a ridiculous book in some ways, but the science feels true, although it intersects with a kind of masculine ideal, as well, of risk taking and superhuman effort in the lab. Maybe not worth reading in full, but it's a literary reference point for your essay.