Who is science's UCI?

Aug 06 2014 Published by under Academia, life in the lab, role models, science

One of the commenters on Drugmonkey's post on the suicide of Yoshiki Sasai says the following:

I fear that in some subfields, like in Lance Armstrong's bicycling world, the fraud is driving out the real science, and so we can't be as confident that "scientific understanding" will win out (at least, in anything less than the long term during which the moral arc of history also bends towards justice). That's too long.

I agree that there are many parallels between these two types of fraud: the years of doping that Lance Armstrong did, made possible by a huge web of lies and people wanting to believe in his heroic story, and the way people want to believe in "heroic science". The kind that gets you on the cover of Science, gets you the huge grants and gets you the SfN selfies, as RXNM pointed out.

It made me wonder if, in science, there is a body like the UCI - the cycling union - that fights fraud (although in the Armstrong days it seems like maybe the UCI wanted to believe in the heroic story too). Is there something like that? And if not, do we need it, and what would it look like? Because we can all say that we need to reform science in order to diminish fraud and make sure "scientific understanding" will prevail over the desire to become a science hero, but are we actually doing that? And what would we need to do?

 

6 responses so far

  • Jim Woodgett says:

    We don't have a UCI in science and I'm not sure that would be the sort of organization we would want, given its conflicts of interest. However, what we do have is a well-meaning (but sometimes subverted) push from Retraction Watch and others. This, I think, touches only the tip of the iceberg and is, at best, revisionist in that it can only attempt to correct previously published transgressions. At that point the damage has been done and the consequences can be difficult to predict.

    Instead, we should have much better methods during the actual practice of science. We pat ourselves on the back for our clever experimental designs and controls, but often do not take into account human nature or human error. We also "want" good outcomes, results and statistics so are less likely to question a good result compared with a bad one. How many young trainees have dropped out, convincing themselves that they are simply not good at science because they can't capture the right result (after being continuously pressed for data by their supervisor)? The incentives for cheating (mild or gross, same thing) are perverse.

    If someone is suspected of wrongdoing, our current corrections are often reactive. Either the suspect is quietly moved on to some other position due to the difficulty of proving fraud, or they are prejudged simply by having increased focus on their work (investigations, etc.).

    So, we need to build in the expectation of validation throughout the process of science, to normalize cross checking. This isn't so hard to do and many labs practice this. It's JUST GOOD PRACTICE. In other words, we don't need a UCI, we need to get our own houses in order. If we don't, we will justifiably lose the confidence of those who support science.

    • babyattachmode says:

      Well yeah, I did not want to start a discussion about whether the UCI is a credible organization or not. I guess it's not the best example... But I do wonder why we don't have an international science control agency. For example, we can expect occasional lab checks from the IACUC on animal experiments, but there is no 'danger' of occasional checks on whether you are properly conducting science. And of course we SHOULD all be practicing proper science, but in some cases it seems like everyone in line (student - post-doc - PI - etc.) has an interest in making the results look nicer than they actually are. Wouldn't it be good if we COULD expect occasional checks by some institution or body to keep us from getting sucked into the 'making up data' black hole?
      Why do we find it so normal that cyclists are subjected to checks to make sure they are not using doping while at the same time we believe scientists should be able to control the process of science on their own?

      • Jim Woodgett says:

        On your cycling point, I totally agree that we need to normalize external/internal controls. Make it part of the normal process. That way it doesn't attract attention - it becomes a standard "audit"*. There are lots of reasons why we don't do this (we hope for the best, we trust, we think the problem is exaggerated, etc.), but when a single bad apple comes to light, it can have an enormous and costly impact. I also don't get the argument that it's too much effort for a rare problem. The point is that a culture of questioning everything we do in research has always resulted in better science. It's when questions are not asked that bad things can fester.

        * The accountability structures we currently have for science leave a lot to be desired. Usually, they are in the form of grant reports which are typically little more than "what did you spend the money on?" Instead of asking what, we should be asking how.

  • Jim Woodgett says:

    To expand on what I mean by culture of validation:

    1. Lab meetings should be open and no holds barred. Criticism and questions should be expected. There is no place for secrecy or embarrassment in a lab meeting. Neighbouring lab members should be welcome to join in.

    2. Lab books should be open and shared. Primary data should be freely exchanged among the lab members (and other labs, if asked).

    3. If an experiment works as expected, it should predict another experiment which should be performed. This experiment should be done by someone other than the person who carried out the first experiment.

    4. When a manuscript is prepared, it should be shared with other scientists prior to submission and then posted on bioRxiv.

    5. Primary data (within reason) and key/new reagents should be provided to anyone who asks. Refusal to share reagents should be a red flag.

    6. Data re-analysis by third parties should be enabled, celebrated and encouraged, not seen as an intrusion. I'd love to see "these results have been independently verified by XYZ" after a paper, and for that to count as a significant contribution. Ditto for failure to replicate or to come to the same conclusion.

    Science is messy; we make mistakes. Own it.

    • potty theron says:

      I agree with all of these in principle. But do I want someone, anyone (CPP or DrIsis, two people whom I truly respect in the blogosphere), regulating my lab meetings? Absolutely not. I have enough regulation, thank you very much. I remember the wild west days before IACUC and OSHA and RadSafety. There are lots of good reasons for those regulations.

      It would be wonderful if everyone did this spontaneously. But the people who cheat on these now will keep cheating with an oversight board.

      Maybe I'm just an old fart. No, correct that. I AM an old fart. Just like my old mentors who said "I'm glad I left the field before they made me do...." I will likely say that about regulations that insist I do these things before I am funded/published/allowed to train anyone.

  • Dave Bridges says:

    Really good suggestions from Jim. I would add that all electronic data storage and protocols should be centralized (and visible to everyone). People should also be encouraged, and celebrated, for walking back their hypotheses when the data turn against their pet theory. Point 3 may not be feasible for smaller labs, though, even if I agree in principle. My understanding is that in industry, the 'next experiment' is nearly always done by someone else. The complicating factor is that for a postdoc or graduate student driving a narrative forward, it might not be feasible to pass projects back and forth between lab members who may not be fully engaged in that area. I am not a fan of narrative-driven science; I'm just not sure how this is done in practice in smaller groups.
