Sister blog of Physicists of the Caribbean in which I babble about non-astronomy stuff, because everyone needs a hobby

Friday, 12 August 2016

To improve research, improve peer review

I'd suggest that the situation can be improved with a combination of changes to journals, publication practices, and CV culture.

Currently the options for publishing are essentially limited to one of the main journals, or to Nature or Science, the latter two being accorded greater prestige. I'm not sure about Science, but Nature is no longer held in particularly high regard by many astronomers. Still, while assessing the quality of research largely amounts to trying to quantify the unquantifiable, perhaps we can at least try to quantify things in a better way than the current system does.

Research falls into various different categories. Some studies are purely observational catalogues designed for future work - they present and measure data, but say nothing themselves about what those data mean. Other papers are the opposite, using no new data collected by the authors themselves but relying purely on other people's previous work. Many are a mix of the two. Some papers which do try to interpret the data do so from a purely observational perspective, others use nothing but analytic or numerical modelling, while a few use both. And then there are "replication studies" (not sure that's the best term), which deliberately test whether previous conclusions stand up to a repeat analysis - usually using new methods or different data rather than literally replicating exactly what the previous team did.

Currently journals do not distinguish between these (or other) different sorts of research in any way. A published paper is a published paper, end of story. OK, many journals also publish letters (short, usually slightly more important or urgent findings) as well as main articles, but that's it. A few journals are slightly more suitable for catalogues than for new theories, but there's no strict demarcation as to which journal different sorts of studies should be published in.

But perhaps there should be - or if an entirely new journal is too much, perhaps there should be different divisions within journals. E.g. there's MNRAS and MNRAS Letters, so why not also MNRAS Catalogues, MNRAS Modelling, and MNRAS New Ideas I Just Thought Up And Would Very Much Appreciate It If Someone Else Could Test For Me, Thanks ? In this way it would be easier to look at an author's CV and determine not just how much research they do, but what sort : are they mainly collecting and cataloguing data, thinking up new interpretations, testing previous research, or a bit of everything ? A wise institute will hire people with a diverse range of skills, not just the ones who publish the most papers of any type. And it will hire some of the extremes - people who only do observations, or only simulations - as well as from the more usual middle ground.

Labelling the research won't help without a corresponding change in how research is valued, e.g. how much it matters on your CV. All the different sorts of research are valuable, but a finding which has been replicated is much more significant. Far from being the least important sort of work - the "let's check just to make sure" kind - replication should be subjected to the strictest form of peer review. A paper verified by an independent replication study should be held in much higher regard than one which hasn't been (of course some findings can't practically be replicated - e.g. no-one's going to repeat a project that took five years to complete, so let's not go nuts with this).

At the same time, stifling novel ideas should be the last thing anyone wants. A good researcher is probably not one whose every paper is verified - that probably means they just haven't had any interesting ideas. You want a mixture : say, 50%. Vigilance in the peer review system would be needed to stop people from gaming it, e.g. by deliberately publishing a mixture of mediocre and crackpot research. However, the notion that only verified findings matter needs to be broken. Yes, if a paper repeatedly fails to stand up to scrutiny then that line of inquiry should be abandoned - but that doesn't mean the idea wasn't a good one at the time.

Maybe all this will even help with the silly grant systems currently in place, which assess projects based on the number of papers produced. If a project produces five papers which contain new ideas but no independently replicated findings, maybe that project isn't as good as one which produced three papers with a mixture of observation, theory and interpretation. Or then again maybe we should just end the silly grant system entirely, because it's silly.

https://www.youtube.com/watch?v=42QuXLucH3Q

11 comments:

  1. 1: Journals for replication studies.
    2: Negative results must be published.
    3: Metastudies to provide an experimental measure for significance, rather than the arbitrary ones used so far.

  2. Andreas Geisler 
    1) Not sure about having separate journals - this is probably better organised by subject matter, so just create divisions within existing journals.
    2) Tricky... because not every positive result gets published ! I can't find the number, but IIRC something like 50% of all ESO data is never published. Often results presented in conference proceedings never make it to full publication. You'd think that's something "publish or perish" would prevent, but surprisingly it doesn't. Too many things to do, I guess. But there does need to be encouragement to publish negative results as well as positive ones. I'm not sure how best to incentivise that.
    3) Yep.

    Also, keep the peer review optionally anonymous but publish the reviewer-author exchanges as online supplements so the review can itself be reviewed.

  3. Rhys Taylor 1) Or, one could obligate journals to carry replication studies for anything they publish - if nothing else, in an online form.

  4. Andreas Geisler Interesting idea. It's surprising to me that replication studies aren't more of a thing in other fields - it's the chance to prove someone wrong, which ought to appeal to anyone's ego... Maybe it should become accepted practice that, just as with invitations to review a paper, a journal could ask you to do a replication study every 5-10 years. Visible and extremely rigorous standards of peer review would give this a certain prestige, thus encouraging institutes to support researchers spending significant time on projects not of their own making. Anyone citing the original paper should be obligated to cite the replication papers as well.

  5. ... sitting on the outside of academia, having run away from it, having watched my father being slowly and unhappily digested in its guts - I do have a few ideas. They're based upon the Roman concept of the collegium, a sort of corporation, more of a union than anything else. Over time, it became the "college".

    Since when did "publish or perish" become the yardstick for success? And what's led to the current logjam? I say it's Reed Elsevier and the publishing firms. If there's to be any fundamental change for the better, it must be at the peer review level. I'm a great believer in making people sign their names to their work: put reviewers' names on the papers, too.

  6. Dan Weese Well, one reason I think a more detailed system of labelling papers would help is that people could then more easily judge whether someone publishing five papers a year is really doing ground-breaking research or just salami publishing. Five papers with totally unsubstantiated claims, or incremental improvements to existing research, look a lot worse than a single really thorough analysis. Which is not to say that we don't need the incremental papers - we do, but not all research is equal. If we started recognising that, perhaps people would prefer to spend longer accumulating results before publishing them.

    I'd be very reluctant to abandon anonymity in peer review, for a number of reasons. Anonymity means a postdoc can freely challenge the results of a senior professor with no concern over how they'll be regarded in the community. Reviewers don't have to worry about whether they're seen as supporting the mainstream consensus or preferring an alternative, so they're free to say what they really think. I say keep the reviewers anonymous but make the dialogue public. That way everyone can judge whether the reviewer was too harsh, too lenient, too lazy, or doing a good job.

  7. Rhys Taylor Well, yes, to a degree anonymity is a good thing. But that's why a collegium system would be an improvement on the current ill-paid and often ill-considered review process. The reviewer speaks in the name of the collegium, exactly as a union apprentice electrician's work is reviewed by a journeyman and the journeyman's work is reviewed by a master electrician. Academics wander about,

    "I saw pale kings, professors too,
    Pale postdocs, death-pale were they all;
    They cried—‘La Belle Dame de Elsevier
    Hath thee in thrall!’"

    It's bosh. Academia must seize back control of its own disciplines, by whatever means necessary. It seems to me the bottleneck is the review process. The acid test of any sound scientific endeavour is its willingness to survive the review process. Improve that, and put the impudent courtiers of the doorkeeper journals back in their corners. They're gumming up the works.

  8. Dan Weese It's not that bad. The current system has many strengths as well as flaws. Having just one or two reviewers ensures rapid publication or rejection (unless the reviewer is an utter bastard), and I'm not sure it's a good idea to increase the number much beyond that. You'll always find flaws in any published paper - with more reviewers you'll spot more flaws, which might lead to a better result but will definitely slow the process down. Plus I'm not at all sure it would lead to better results anyway : too many cooks and all that. More reviewers will just mean more conflicting opinions. Better to just publish and let the wider community make their own judgement.

    If academia needs to "take back control", a la Brexit, it's not from the journals. They don't bear any direct responsibility for the pressure to publish. In astronomy I'm pretty happy with the review process I've received - IMHO there are enough safeguards in place. Authors can request to avoid certain reviewers. They can also ask for a different reviewer if they feel the one selected for them isn't doing a good job (on rare occasions journal editors do this of their own volition). And if all else fails they can submit to another journal. Reviewers are free to reveal their identity if they wish.

    No, if there's a problem in academia it comes from the funding model. In some quarters science is assessed as though it were a business, hence people try to do ridiculous things like using publication rate as a measure of scientific quality. If, however, journals were to label at least what type of research was done, that might just circumvent the issue. Institutes would be able to say, "we hired this person because their publication record is closest to what the position requires, even though their publication rate isn't that high". Grant agencies would be able to say, "you only published one paper, but it included theory, observation and interpretation, and was subjected to a replication study, so it's clearly of the highest quality". As it is now, it's the number of papers and citations, and that's it. Some recognised standard of research quality beyond publishing in Nature could potentially avoid the publish-or-perish culture while keeping the bean counters happy.

  9. If we put aside the issue of tracking and crediting the work of scientists, are papers the best way of disseminating ideas and methods, collecting data, and performing analysis?

  10. Rhys Taylor That's all true, in the current process model. Unless I'm missing the point here (which happens a lot! Very embarrassing, too) you've mooted changes to get scientists and academics off the gerbil wheel of Getting Published in a Premier Journal. And yes, it is about the funding model, because the current model rewards some asinine metric called "publication rate", like some boozer looking under the streetlamp for his keys because that's where he can see the tarmac. The un-quantified (and probably un-quantifiable) metrics of quality and significance are shoved into some Bed of Procrustes and converted to the useless "citation count" metric.

    The bean counters could be a scientist's best friend. Bean counters understand process rather better than people suppose because they understand the components of Value. Also Intangible Assets.


