I like big risks and I cannot lie
First in a series of posts inspired by "Rationalizing Risk Aversion in Science"
I was, my entire academic career, almost uncontrollably attracted to risky projects. At the end of my PhD, when I was choosing a direction to go as a postdoc, I made the calculated move to leave the field of yeast gene expression and go into plant biology. It was a smart decision, though my reasoning at the time was based on a (very, extremely, I can’t emphasize how wrong) impression that there weren’t many plant biochemists. Changing fields was invigorating and there was a great deal of opportunity for me in plant science.
I also chose to apply only to a set of very prestigious plant biology labs for my postdoctoral training—labs that, as my thesis advisor said, “people would have heard of.” This was also a smart decision, given my career goals, and set me up for a lifetime of connection to one of the most influential plant biologists of our age.
However, the way I went about selecting a specific research area within plant biology—that was neither calculated nor smart. As I was interviewing for postdoc positions, I read this paper in preparation for meeting a yeast biologist who was also working a bit with plants. I can still remember reading that paper in my graduate school library, following it up with additional papers on gravity perception in plants, and just marveling that we didn’t know something so fundamental, so basic about plant biology. How could we not know how plants discern up from down??? And, though I didn’t join the Fink lab, I decided that I wanted to identify the plant gravity receptor.
At the time, I didn’t know that excellent scientists across the globe were already trying to understand the molecular genetics of plant gravity response. I also didn’t know that it was proving to be a challenging, if not intractable, problem. [Gravity perception in plants is still not fully understood, though folks are getting closer and closer to a full molecular understanding]. I also didn’t worry that I was doing this work in a lab that wasn’t working on gravitropism but on flower development and plant stem cells, so I was on my own in a lot of ways. The whole project was the very definition of risky.
As you have probably intuited, I was not successful. I tried many, many genetic and expression-based approaches to finding the gravity receptor and they all failed, some immediately and some after years of effort. About 5 years into it, I finally admitted that I needed to stop, and pivoted to an adjacent topic—the molecular identification and characterization of mechanosensitive ion channels (which were often proposed to be gravity receptors). This pivot was almost immediately successful, at least successful enough for me to get a faculty position. And that, dear reader, was the start of a fruitful (and delightful) research topic that I pursued for the rest of my career.
High Risk, High Reward
Picking a technically challenging question, in a field in which I was a novice, while in a lab that did not specialize in that field, is a perfect example of “high-risk, high-reward” research. I risked—and in fact came very close to—burning out on the lack of results. But I stuck it out, made a few savvy (also lucky) choices, and was rewarded with a career-defining, independent project that I was able to take with me as I started up my own research lab.
Scientists know that risky projects are often (though not always) the ones that will move a field forward. And I do think that the work we did on MSL and Piezo channels was important for the field of mechanobiology. But I would never counsel someone else to do what I did, and in fact have often wondered what my career would have looked like if I’d been more strategic about my research directions. Because during the first FIVE years of my postdoc, and in the first FOUR years of my faculty position, I had nothing tangible to show for all my hard work and long hours except stacks and stacks of lab notebooks stuffed with data and anguished “It didn’t work again” entries. There’s no section on a CV where you can list the height of your stack of lab notebooks. And, as I’ve written before, this lack of demonstrable productivity on my part was almost a career killer.
The problem of invisible effort in risky science
What kept my career alive? I’m going to get there, I promise! But first, let me tell you about a recent paper in PLOS Biology entitled, “Rationalizing risk aversion in science: Why incentives to work hard clash with incentives to take risks”.
Much of this paper was beyond my understanding, at least at first read, so forgive any mistakes (and let me know in the comments if you can clarify!). The authors use mathematical modeling to understand the relationship between risk and effort in scientific research. They conclude that the unseen nature of much of scientific work—the way that you might work on a risky project for years and have nothing to show for it if the tool you are building doesn’t work, or if your hypothesis is wrong, or if experiments prove too difficult to execute—ends up skewing our reward system towards less risky projects.
“Scientists respond [to the non-observability of actions] by working on safe projects that generate evidence of effort but that don’t move science forward as rapidly as riskier projects would.” Gross and Bergstrom, 2024.
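To make that logic concrete, here’s a minimal back-of-the-envelope sketch in Python—my own toy numbers and simplification, not the authors’ actual model. It compares a safe project and a risky one when an evaluator can only see successes (publications), not effort:

```python
# Toy illustration (my numbers, not the Gross & Bergstrom model): how judging
# scientists only on *visible* output (successes) can favor safe projects.

def expected_outcomes(p_success, value_if_success):
    """Return (expected scientific value, expected visible output).

    An evaluator sees only successes, so expected visible output is just
    the success probability; effort spent on failed projects is invisible.
    """
    return p_success * value_if_success, p_success

# Safe project: almost always "works", but each success is a modest advance.
safe_value, safe_visible = expected_outcomes(p_success=0.9, value_if_success=1.0)

# Risky project: usually fails, but a success is a major advance.
risky_value, risky_visible = expected_outcomes(p_success=0.1, value_if_success=15.0)

print(f"Safe : expected value {safe_value:.2f}, visible output {safe_visible:.2f}")
print(f"Risky: expected value {risky_value:.2f}, visible output {risky_visible:.2f}")
# Safe : expected value 0.90, visible output 0.90
# Risky: expected value 1.50, visible output 0.10
# The risky project moves science further in expectation, yet a career judged
# on visible output strongly favors the safe one.
```

In this toy comparison the risky project advances science more in expectation, but the visible record it produces is far thinner—which is exactly the tension the paper formalizes.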
Because most scientists know that taking on a risky project means that a great deal of work might end up totally invisible to external evaluators, we understand that a risky project = a risky career. And we get it—nobody likes slackers, and how can you tell if someone is a slacker or just picked the wrong project? From the outside, they look the same. I remember vividly, after realizing I had yet another failed project on my hands, wailing to my postdoc advisor, “But I’m working so hard!” I felt such relief when Elliot said, “I know you are!” He must have measured my stack of lab notebooks!
There’s risky and then there’s RISKY
The authors had to make a number of simplifications and assumptions just to produce the equations, which is totally normal, but which also opens up all kinds of questions. For example, the authors admit that one of the simplifications they were forced to make was that all scientists are the same:
“Our analysis makes a number of simplifications . . . Perhaps most substantively, we have assumed that all investigators are alike. In science, researchers differ in many ways that affect how they design their research programs, including their abilities and their predisposition to take scientific risks.” Gross and Bergstrom, 2024.
Different scientists are motivated to different degrees by curiosity/a desire to solve problems/an affinity for the hands-on work. What problem they select is a mixture of personal motivation and a (conscious or unconscious) calculation about what is doable, what is “hot”, and what will be publishable. These differences are a big part of what makes science so effective and so fun to be a part of!
But there is another individual aspect to this which is not fun, and that is the way that a scientist’s gender, race, ethnicity, and association with prestige influence whether others are willing to give them the benefit of the doubt—or whether others are unwilling to “see” any work that has not resulted in a publication.
I paid for my risky postdoc interests with failed projects, low publication rates, and a three-year search for a faculty position. BUT—and this is a big but (sorry)—I was, in the end, given the benefit of the doubt. I had famous advisors and respected institutions on my CV, I spoke the language of academia, I was white. I am pretty sure that things would have played out differently if I’d lacked these particular characteristics, characteristics that helped hiring committees empathize with me and see potential that was not at all reflected in my CV.
How to relieve the burden of risky research?
The authors of the PLOS Biology paper mentioned above conclude that the self-organizing nature of science prevents any change to this essentially conservative system, and that the essential tension between risk and invisible effort at the individual level will keep many scientists choosing safe projects. I want to think and talk more about whether this truly is an insurmountable problem. I also want to talk more broadly about the invisible parts of faculty work, and how I think we lean on metrics as a result. Next post: can we make what’s invisible visible through different publishing approaches?
Side note
One irony of this meditation on invisible work in academic science is that I have myself been pretty unseen here (Substack) lately. There are a number of reasons for my disappearance: laziness, burn-out, getting my writing/editing/coaching business off the ground. I have been worrying a fair bit about not meeting readers’ and subscribers’ expectations, and I’m sorry if I’ve let you down. I plan to be more visible going forward!
Discussion Section
If you are an academic, biologist or other—are any of your activities invisible?
How can we shift the effects of risky science off of the individual? Large, multi-group projects are one way, but what about single investigators, and especially trainees?
This post reminds me of Sinclair Lewis's "Arrowsmith," one of the only American novels I know of to dramatize the scientific method. It's a ridiculous book in some ways, but the science feels true, though it also intersects with a kind of masculine ideal of risk-taking and superhuman effort in the lab. Maybe not worth reading in full, but it's a literary reference point for your essay.
As a former materials scientist working in a large research group (and lab) that emphasized experimental work, I can say that nearly all the work we did, even negative findings, still resulted in publications. Was everything earth-shattering? Absolutely not. However, all of it met the core premise of:
1. a hypothesis, such as: adding ceria to aluminosilicates will increase their toughness (that's a good thing, if true)
2. an experiment: can we validate the hypothesis, and if so, over what space/range of compositions?
3. observations & insights -> real progress, or a newer hypothesis that might asymptotically lead us there (if not ceria, does strontia do it? Why or why not? If ceria does work, why? What microstructures lead to what physical/mechanical properties? New hypothesis: can such microstructures work with other matrices and other dispersions?)
Interestingly, depending on the research lead and their motivations (like you, I too was lucky to work with one of the most successful and well-connected electron microscopists of that era), these experiments needed a breadth of experts: the chemical folks to make the composites, the mech folks to measure their properties, the characterization folks (us) who could decipher the underlying microstructures, and of course the theoreticians who would hypothesize before and/or after to explain what we learnt—rinse and repeat. Of course we had our share of huge failures as we tried to find those elusive superconductors, metallic liquids that were useful, and ceramic composites that would behave metallically. A long-winded way of saying that in specific (particularly experimental) niches, the effort and outcomes seemed easier to demonstrate even if they didn't deliver the silver bullet we sought.
In contrast, I know theoretical materials scientists did not have it anywhere near as easy, and as some of the other commenters have noted, the humanities lie at the far reaches of this. Though I suspect a lot of science makes no sense to anyone outside academia or the research niche, a lot of the humanities seem far more accessible to the lay public—as long as we don't let the darn social scientists use their jargon but speak in plain English (John Green and Salman Khan have helped hugely!)
oops, longer than I intended to be. Hopefully it makes sense.