Category Archives: academia

Who gets NIH grants and is it a problem?

[Figure: the rise and fall of the dominant few, from http://grantome.com/blog/rise-fall-dominant-few]

Drugmonkey posts a graph from Grantome.com showing the distribution of grants by institution. A series of follow-up posts at Grantome includes one that looks beyond the top 50 institutions and concludes:

The NIH website reports that more than 80% of its budget goes to over 2,500 universities and research institutions. Yet for the R01 – which is arguably the most sought-after, prestigious grant that is the financial staple of many medically oriented research laboratories in the U.S. – the bulk of the grants are distributed to 4% of these 2,500 institutions. It therefore appears that the division of R01 grants among institutions parallels that of wealth among individuals in the U.S. population. It is highly unequal, with the bulk of the pool held by a lucky few.

Drugmonkey asks, both in the comments and on Twitter:

50 institutions get 60% of the #NIHGrants. Good, bad, or meh?

The distribution of grants is interesting, but it leads to the obvious question: what should it look like? I don't think that is knowable from the data provided, and it may not be knowable, period. The description of the top group as "a lucky few" suggests a perspective from a particular point of view. As with the reactions to the Alberts PNAS paper (which is now taking comments btw), scientists/PIs tend to view the question from the perspective of what is good or "fair" to scientists. But it is important to remember that just as pro-market is not the same as pro-business, what is good for Science is not always the same as what is best for scientists. Or, as I tell graduate students about the importance of working hard: the NIH is not supposed to be a welfare program for smart people. I usually add what Bob Sauer used to say to every new person in the lab: you don't have to be that smart to do Biology. If you were really smart you'd be doing math or philosophy or shit like that.

The correct but painful question then is not whether PIs are being treated fairly, but rather whether federally funded science agencies are doing a good job of optimizing the taxpayer/citizen return on investment in research. In the case of NIH, the investment is in biomedical research (preferably a big-tent definition of biomedical, IMO). In general, while I still tend to think that the peer review systems at NIH and NSF are like democracy (terrible, but better than most of the plausible alternatives), I do worry that the system is getting worse at approximating an ideal system for maximizing the taxpayer investment in research. But I am not sure whether a system with the nonexistent perfect program officers, SRAs, and reviewers would increase inequality or decrease it.

Lots of interesting reading at the links, in any case.

Human subjects and education research

At Retraction Watch, there’s a story of a paper about ethics training retracted due to IRB human subjects protocol problems. We tend to think of human subjects research as involving things like drug trials, but a lot of it is things like this:

This was an IRB-approved paper-pencil study investigating how certain features of ethics case studies influence knowledge and application of case study principles to new ethical scenarios.

In other words, if I'm understanding the post and the excerpts, the investigators were studying what does and doesn't work for teaching the higher levels of Bloom's taxonomy in ethics education. So what went wrong?

Several administrative issues influenced our Institutional Review Board’s decision to not allow the data from this study to be used for research purposes. One of these had to do with the fact that some of the course instructors were not listed as key study personnel and they handed out and collected the study materials and informed consent forms. Even though they did not have any other involvement in the study, we recognized this oversight. Additionally, we implemented two minor changes to study materials, including dropping two items and renumbering 8 items, and did not obtain re-approval for these changes. Lastly, through this review process, we became aware that roughly half of the informed consent forms (ICFs) were not on file. Although we kept a clear record of who consented and who did not through the use of a training checklist, we recognized this was a data storage lapse. We worked with our IRB to fix these problems and have better processes in place to prevent similar issues from occurring in the future. Although the senior editor for the journal did not think that these issues warranted retraction of our paper, our university’s decision that we could not share the data publicly influenced our decision to voluntarily retract the article.

I understand the basis for the IRB/human subjects rules, but this case illustrates a problem for all of us as educators. Every time I teach any course, I am doing an experiment in what works. It's usually an uncontrolled experiment with unconscionably bad record-keeping, but it's still an experiment and my students are subjects. As the emphasis on rigorous assessment of student learning outcomes increases, we will try to be more systematic about gathering and analyzing data from assessment instruments, which can be done without IRB approval as long as it's not for publication. Consider how different that is from a drug trial!

As I understand it, doing a proper human subjects protocol for assessment of teaching requires outlining ahead of time all the interactions the assessor will have with the students/subjects. Deviation from that protocol (analogous to changing an FDA-approved drug manufacturing protocol) is a no-no – as in the case above. This makes it almost impossible for the actual instructors to also do the assessment… and creates a barrier to dissemination of interesting STEM teaching methods being tried by scientists who don't have the right collaborators for assessment.

Can we make Science more family-friendly?

A couple of weeks ago, Jon Eisen tweeted a link to an article in the Atlantic: For Female Scientists, There’s No Good Time to Have Children

Over the past decade these issues have come to the attention of universities in the United States and abroad. Many sensible policies have been introduced in an attempt to make academia more family friendly. Two of the most common are tenure-clock stoppage and parental leave. Although these interventions are important, they are not enough on their own. They raise numerous complications, but in the interest of brevity I’ll name only two. First, these policies need to be entitlements, rather than special accommodations that have to be requested and approved. Second, they need to be available to and used by men and women alike.

I was talking to some of my colleagues with young children at a conference last week, which reminded me of the linked piece. Tenure clock extensions and parental leave are good things. However, there is a limit to how much this kind of institutional support can do if taking time off affects the productivity needed to get and keep funding. That means that stronger parental leave benefits may not be used by either men or women even if they are available.

There are things that could be done by both employers and the broader community to be more supportive of parent/scientists. Unfortunately, some of these things, like making day care more available and accommodating, can run into regulatory hurdles. For example, when I was an organizer of the Phage Meetings, I had a lot of discussions about how to make day care more available for conference participants. On the one hand, the local day care center could not take additional children without violating local rules on the ratio of caregivers to children. On the other hand, liability issues meant the meeting could not recommend potential external day care providers. I was under the impression we could not even help organize parents to share childcare with each other.

For some participants, the partial solution for meetings was to hire their own extra child care, either on site or to look after kids left at home with the other parent. I know of a couple of universities that offer benefits to support extra child care so parents can attend conferences… but don't tell parents about them.

WP PubMed Reflist

One of the reasons I started my old blog was to use it as a front end for managing our department website. Prospective graduate students generally won't conclude that my friends at MIT haven't published for the last 5 years just because their websites are out of date, but people with bicoastal biases might actually believe that about Texas A&M. Manual updates are a pain. Faculty send you information in different formats, when they bother to respond at all.

[Screenshot: Baker publication list, July 2013]

When I was in charge of graduate recruiting, I also wrote some PHP scripts to automate the management of our departmental website, in an attempt to break the cycle of revision via Dreamweaver. One of the features I was most proud of was automated updating of faculty publication lists from PubMed via NCBI's EUtils web service. Now that we've migrated to WordPress, I wrote a WordPress plugin, WP PubMed Reflist, to supply that functionality. I've now made it available on WordPress.org, and I was pleased to see that the updater here pulled the latest update, which adds PMC and Full Text links.
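
For anyone curious how that works under the hood, the lookup boils down to two EUtils requests: esearch to turn a query into a list of PMIDs, and esummary (or efetch) to turn those PMIDs into citation data. Here is a minimal PHP sketch of that round trip; it is not the plugin's actual code, the author query is only an example, and caching, rate limiting, and error handling are left out:

```php
<?php
// Minimal sketch, NOT the actual WP PubMed Reflist code: look up papers for
// an author via NCBI EUtils and print bare-bones citations.
// Assumes allow_url_fopen is enabled; the search term is illustrative only.

$base = 'https://eutils.ncbi.nlm.nih.gov/entrez/eutils/';
$term = urlencode('Baker TA[Author]');   // example author query

// Step 1: esearch returns the PubMed IDs (PMIDs) that match the query.
$search = simplexml_load_file($base . "esearch.fcgi?db=pubmed&term=$term&retmax=5");
$pmids = array();
foreach ($search->IdList->Id as $id) {
    $pmids[] = (string) $id;
}

// Step 2: esummary returns citation fields for all of those PMIDs in one call.
$summary = simplexml_load_file($base . 'esummary.fcgi?db=pubmed&id=' . implode(',', $pmids));
foreach ($summary->DocSum as $doc) {
    $title   = (string) current($doc->xpath('Item[@Name="Title"]'));
    $journal = (string) current($doc->xpath('Item[@Name="Source"]'));
    $date    = (string) current($doc->xpath('Item[@Name="PubDate"]'));
    echo "$title $journal ($date). PMID: {$doc->Id}\n";
}
```

The plugin layers the reference formatting and the PMC and Full Text links on top of the same basic calls.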

For comparison with Tania Baker's publication list at biology.mit.edu as of today, here is what the plugin produces:

  1. Yien, YY, Ducamp, S, van der Vorm, LN, Kardon, JR, Manceau, H, Kannengiesser, C et al.. Mutation in human CLPX elevates levels of δ-aminolevulinate synthase and protoporphyrin IX to promote erythropoietic protoporphyria. Proc. Natl. Acad. Sci. U.S.A. 2017; :. doi: 10.1073/pnas.1700632114. PubMed PMID:28874591 .
  2. Olivares, AO, Kotamarthi, HC, Stein, BJ, Sauer, RT, Baker, TA. Effect of directional pulling on mechanical protein degradation by ATP-dependent proteolytic machines. Proc. Natl. Acad. Sci. U.S.A. 2017; :. doi: 10.1073/pnas.1707794114. PubMed PMID:28724722 PubMed Central PMC5547649.
  3. Baytshtok, V, Chen, J, Glynn, SE, Nager, AR, Grant, RA, Baker, TA et al.. Covalently linked HslU hexamers support a probabilistic mechanism that links ATP hydrolysis to protein unfolding and translocation. J. Biol. Chem. 2017;292 (14):5695-5704. doi: 10.1074/jbc.M116.768978. PubMed PMID:28223361 PubMed Central PMC5392565.
  4. Hall, BM, Breidenstein, EB, de la Fuente-Núñez, C, Reffuveille, F, Mawla, GD, Hancock, RE et al.. Two isoforms of Clp peptidase in Pseudomonas aeruginosa control distinct aspects of cellular physiology. J. Bacteriol. 2016; :. doi: 10.1128/JB.00568-16. PubMed PMID:27849175 PubMed Central PMC5237113.
  5. Baytshtok, V, Fei, X, Grant, RA, Baker, TA, Sauer, RT. A Structurally Dynamic Region of the HslU Intermediate Domain Controls Protein Degradation and ATP Hydrolysis. Structure. 2016;24 (10):1766-1777. doi: 10.1016/j.str.2016.08.012. PubMed PMID:27667691 PubMed Central PMC5061557.
Search PubMed

Tenure surprises

Via Jonathan (@phylogenomics) Eisen on Twitter: The Chronicle of Higher Ed has an article on fear and loathing among the untenured. Overall, the content is good, common sense stuff: mentor your junior faculty, hire with the expectation of promotion, give frequent feedback, don’t discourage creative teaching by overemphasizing numerical student reviews.  What led me to want to inaugurate the new Blogs for Industry around this article is the idea that tenure decisions should not be surprises.

Glenn R. Sharfman, Manchester’s dean of academic affairs, says, “I don’t want there to ever be any surprises when someone comes up for tenure. They should know where they stand.”

The idea that there should not be surprises in any group decision-making should not just apply to promotion and tenure.  Surprises mean we have not adequately planned for reasonably foreseeable contingencies.  So I don’t want there to be surprises either, and promotion and tenure is an area where surprise reduction is a good thing.

This post is just to point out that reducing surprises is a goal, but it is not the only goal. This is because there are bad ways to reduce surprises as well as good ways, and getting to zero at all costs tempts us to resort to some of the bad ways. The desire to remove subjectivity can unintentionally send a message to the pre-tenured faculty that there is a hard-and-fast checklist: I need to get X grants and pass a threshold for log2(publications x impact factor). I don't know anyone who literally makes the decision that way, but I have seen junior faculty interpret their feedback as if that's the hidden, secret meaning of the entirely conventional advice we give them to get funded, publish more, and promote your work at an appropriate number of conferences. At institutions like mine, we also see versions of the problem described at the Volokh Conspiracy, where candidates ignore the explicit advice they are given based on their (probably incorrect) perception of standards at institutions higher in the usual rankings. After all, why listen to the schlubs who are actually going to vote on your promotion when you can get advice from superstars elsewhere? Dear life sciences Asst Profs everywhere:

  • You might want to notice whether the stars advising you are actually participating in promotion decisions at their own institutions. Great scientists are not necessarily good faculty builders.
  • Your unfinished manuscript in preparation for Science, Nature or Cell wouldn’t get you promoted at Harvard either.
  • If it’s not close, there’s nothing to fear.

We can and should strive to identify and suppress things like implicit bias.  But the review for promotion and tenure is necessarily subjective. Many of the attempts to remove subjectivity just outsource it, in some cases to unidentified grant panelists and journal editorial staff.