9 Comments

“How does it twist our relationship to our science?” Well, as you point out, it certainly creates terrible incentives for corner cutting and sloppiness. We’re only a week out from the president of Stanford University resigning because at least 5(!) papers from his research group had to be retracted due to concerns about scientific validity and data manipulation. He was deemed unaware of the specifics, but I *guarantee* he created an intense atmosphere of “get it done, I don’t care how, and I don’t want to know the details.” That is the dirty secret of how some huge labs somehow miraculously maintain multimillion-dollar NIH funding for 30-40 years: ruthlessness. And when the findings inevitably fall apart, it seriously hurts science. The public loses trust in research, lots of money is wasted investigating dead ends, and we’re farther behind in the search for cures--terrible.

author

Indeed! Some of this comes from the PI for sure. When I was in graduate school, I was presenting in lab meeting and I said, “This experiment didn’t work!” My advisor corrected me--the experiment worked, you just didn’t get the result you wanted. This was an important lesson and one of the reasons I admire her so much (she now runs HHMI!).


Absolutely! Science needs to find a way to address negative results and the replication problem. Perhaps giving "partial credit"--some fraction of what a full paper counts for--for submitting well-done negative results to medRxiv or a similar repository could help? A huge part of the literature is distorted by the combination of (1) "publish or perish" and (2) a huge selection bias toward publishing sexy, new, or controversial findings, not a parade of people going behind the pioneers and proving their study was NOT a fluke.

author

Definitely, some of the new micropublication and preprint initiatives will help with this. The infrastructure is there; now PIs (especially the old ones) need to teach the same lesson I learned in grad school. And they need to mean it!


The same mentality prevails in literary studies. I should note that the longer review periods post-COVID have really hurt younger faculty. When I was just starting out, I had some desk rejections for essays that were later published. It’s frustrating when something so important is so subjective and often beyond your control.


I recently came across this tweet from Devang Mehta, and I think it would be useful to a lot of people: https://twitter.com/drdevangm/status/1681881831287709697. The concept of "narrative CVs" would likely reduce the expectation that candidates (for appointments, promotions, grants, etc.) are just their papers and would help highlight the other achievements that stand out for them. I am also really excited about modular publishing platforms like Research Equals (https://www.researchequals.com/) and Octopus (https://www.octopus.ac/). These take away the notion that publications are just results and give due credit to every aspect of research, from hypothesizing to data generation to reviewing the data. Notably, Octopus is an effort by publicly funded agencies in the UK (UKRI and Jisc), which is quite commendable (we can also possibly expect UK institutions to value these platforms in the future).

Change is coming, albeit slowly. It can be accelerated by us being part of that change! :) :)

author

These are interesting developments, but I have yet to see them used in any meaningful way. I hope you are right, and that change is happening!


Sorry to post twice, but I find myself of two minds on this subject. Early in my career, I published because I had to, and that meant scouring my brain for paper ideas, then torturing them into publishable form. That really sucks, and it didn't improve my teaching one bit.

However, maybe six years ago I started a new research project on neuroscience and literature. That agenda advanced the way research ought to. I had real burning questions, knew how to leverage scholarship to answer them, and felt I was contributing something new to Cather Studies. If I'd stayed in my former position, that work would have assuredly led to a monograph. But it took me until my early 40s to have the clarity about research that would make it enjoyable and useful to others.

The problem in literary studies is that there is incessant production for the reasons you describe (the imperative to prove yourself, to measure your worth with those CV lines). But there is no commensurate consumption of that work. I know that what I publish on Willa Cather will be read by a handful of experts, and I usually hear myself cited once or twice when I go to a seminar (as I did this summer). One of my essays on the 1918 influenza in Cather's war novel, One of Ours, has attracted some broader attention. But quite a lot of my early work was me making an argument because I had to get published to get a job. That's a really poor basis for creating new knowledge.

At the risk of going on, I once dated an audiologist who said she had to sift through a lot of bad data among publications in her field. I remember being really flabbergasted by that. Such a scientific and technical field, and yet people were publishing unreliable data? It must have been for the reasons you describe.

author

I was talking to a friend about this--the deep disparity between the impact a paper has on your career and the impact it has on your field. Most papers are read by a very few. The optimistic view is that it's all incremental; the pessimistic view is that it's all garbage.
