Frontiers in Nature: A potential opportunity to reverse the peer-review process
Talking with fellow scientists, it would seem that most have a love/hate relationship with the current state of scientific publishing. They dislike the fact that getting a Science or Nature paper seems to be the de facto goal of research these days, but don’t hesitate to pop open the bubbly if they achieve it. This somewhat contradictory attitude is not altogether unreasonable given the current setup. The fate of many a research career depends on getting a paper (or papers) into one of these ‘high impact’ journals. And as illogical as it seems for the ultimate measure of the importance of months of research to rest in the hands of a couple of editors and one to three peer reviewers, these are the rules of the game. And if you want to get ahead, you gotta play.
The tide, however, may be turning. Many smaller journals have cropped up recently, focusing on specific areas of research and implementing a more open and accepting review process. PLoS ONE and Frontiers are at the forefront of this. Since the mid-2000s, these journals have been publishing papers based purely on technical merit, rather than some pre-judged notion of importance. This leads to a roughly 70-90% acceptance rate (compared to Nature’s 8% and Science’s <7%) and a much quicker submission-to-print time. It also necessitates a post-publication assessment of the importance and interest level of each piece of research. PLoS achieves this through article-level metrics related to views, citations, blog coverage, etc. Frontiers offers a similar quantification of interest, plus the ability of readers to leave commentary. Basically, these publications recognize the flaws inherent in the pre-publication review system and try to redress them. PLoS itself says it best:
“Too often a journal’s decision to publish a paper is dominated by what the Editor/s think is interesting and will gain greater readership — both of which are subjective judgments and lead to decisions which are frustrating and delay the publication of your work. PLOS ONE will rigorously peer-review your submissions and publish all papers that are judged to be technically sound. Judgments about the importance of any particular paper are then made after publication by the readership (who are the most qualified to determine what is of interest to them).”
So we have the framework for a new type of review and publication process. But such a tool is only helpful to the extent that we utilize it. Namely, we need to start recognizing and rewarding the researchers who publish good work in these journals. This also implies putting less emphasis on the old giants, Nature and Science. But how will these behemoth establishments respond to the revolution? Well, we may soon find out. NPG, the publishing company behind Nature, has recently announced a majority investment in Frontiers. The press release stresses that Frontiers will continue to operate under its own policies, but describes how Frontiers and Nature will interact to expand the number of open access articles available on both sides. Interestingly, the release also states that “Frontiers and NPG will also be working together on innovations in open science tools, networking, and publication processes.” A quote from Dr. Philip Campbell, Editor-in-Chief of Nature, is even more revealing:
“Referees and handling editors are named on published papers, which is very unusual in the life sciences community. Nature has experimented with open peer review in the past, and we continue to be interested in researchers’ attitudes. Frontiers also encourages non-peer reviewed open access commentary, empowering the academic community to openly discuss some of the grand challenges of science with a wide audience.”
Putting aside (perhaps somewhat naively) conspiracy theories of an evil corporate takeover, could this move mean that the revolution will be a peaceful one? That Nature sees the writing on the wall and is choosing to adapt rather than perish?
If so, what would a post-pre-publication-review world look like? Clearly, if some sort of crowdsourcing is going to be used to determine a researcher’s merit, it will have to be reliable and standardized. For example, looking at the number of citations per article view/download is helpful in determining whether an article is merely well promoted but not actually helpful to the community, or vice versa. And more detailed information can be gathered about how the article is cited: Are its results being refuted or supported? Is it one in a list of many background citations, or the very basis of a new project? Furthermore, whatever pre-publication review the article has been subjected to (for technical merit, clarity, etc.) should be posted alongside the article, along with reviewer names. The post-publication commentary will also need to be formalized (with rankings on different aspects of the research: originality, clarity, impact, etc.) and fully open (no anonymous trolls or self-upvoting allowed). Making participation in the online community a mandatory part of getting a paper published can ensure that papers don’t go un-reviewed (the very new journal PeerJ uses something like this). If sites like Wikipedia and Quora have taught us anything (aside from how everything your mother warned you about is wrong), it’s that people don’t mind sharing their knowledge, especially if they get to do it in their own way and time. And since the whole process will be open, any “you scratch my back, I’ll scratch yours” behavior will be noticeable to the masses. The practices of Frontiers and PLoS are steps in the right direction, but their article metrics will need to be richer and more reliable if they are to take the place of big-name journal publications as a measure of research success.
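To make this concrete, here is a minimal sketch (in Python) of how such article-level metrics might be aggregated: a citations-per-exposure ratio to flag work that is well promoted but little used (or vice versa), plus averaged reader rankings on originality, clarity, and impact. Everything here, from the field names to the sample numbers, is hypothetical; it assumes a journal exposes per-article counts and signed reader reviews, which no current platform does in exactly this form.

```python
# A minimal sketch of crowdsourced article-level metrics, assuming a journal
# exposed per-article counts and signed reader reviews. All names and sample
# numbers are hypothetical, for illustration only.

from dataclasses import dataclass, field
from statistics import mean

@dataclass
class ReaderReview:
    reviewer: str      # real names only: no anonymous trolls
    originality: int   # each aspect ranked 1-5
    clarity: int
    impact: int

@dataclass
class ArticleMetrics:
    views: int
    downloads: int
    citations: int
    reviews: list = field(default_factory=list)

    def citation_ratio(self) -> float:
        """Citations per exposure (view or download). A high exposure count
        with few citations may suggest an article that is well promoted but
        little used by the community, and vice versa."""
        exposures = self.views + self.downloads
        return self.citations / exposures if exposures else 0.0

    def aspect_scores(self) -> dict:
        """Average the formalized post-publication rankings per aspect."""
        if not self.reviews:
            return {}
        return {
            "originality": mean(r.originality for r in self.reviews),
            "clarity": mean(r.clarity for r in self.reviews),
            "impact": mean(r.impact for r in self.reviews),
        }

# Hypothetical usage:
article = ArticleMetrics(views=4200, downloads=310, citations=18)
article.reviews.append(ReaderReview("J. Smith", originality=4, clarity=5, impact=3))
article.reviews.append(ReaderReview("A. Jones", originality=3, clarity=4, impact=4))
print(f"citations per exposure: {article.citation_ratio():.4f}")
print(article.aspect_scores())
```

The point of the sketch is that the computation itself is trivial: ratios and per-aspect averages fall out for free once the raw counts and signed reviews are openly available. The hard part is getting journals to collect and expose them.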
Some people feel that appropriate post-publication review makes any role for a journal unnecessary. Everyone could simply post their papers to an e-print service like arXiv, as many in the physical sciences do, and never officially submit them anywhere. Personally, I still see a role for journals: not in placing a stamp of approval on work, but in hosting papers and in aggregating and promoting research of a specific kind. I enjoy having the list of Frontiers in Neuroscience articles delivered to my inbox for my perusal. And I like having some notion of where to go if I need information from outside my field. And as datasets become larger and figures more interactive, self-publishing and hosting will become more burdensome. Furthermore, there’s the simple fact that having an outside set of eyes proofread your paper and provide feedback before widely distributing it is rarely a bad thing.
But what then is left of the big guns, Science and Nature? They’ve never claimed a role in aggregating papers for specific sub-fields or providing lots of detailed information; quite the opposite, in fact. Science’s mission is:
“to publish those papers that are most influential in their fields or across fields and that will significantly advance scientific understanding. Selected papers should present novel and broadly important data, syntheses, or concepts. They should merit the recognition by the scientific community and general public provided by publication in Science, beyond that provided by specialty journals.” [emphasis added].
Nature expresses a similar desire for work “of interest to an interdisciplinary readership.” And given that Science and Nature papers have some of the strictest length limits, they’re clearly not interested in extensive data analysis. They want results that are short, sweet, and pack a big punch for any reader. The problem is that this isn’t usually how research works. Any strong, concise story was built on years of messy smaller studies, false starts, and negative results, most of which are kept from the rest of the research community while the researcher strives to make everything fall into place for a proper article submission. But in a world of minimal review, any progress (or confirmation of previous results, for that matter) can be shared. Then, rather than letting a handful of reviewers (n=3? come on, that only works for monkey research) try to predict the importance of the work, time and the community itself will make it known. Nature and Science can still achieve their goals of distributing important work across disciplines, but they can do it once the importance has been established, by asking the researchers behind good work to write a review encapsulating their recent breakthroughs. That way, getting a piece into Nature or Science remains prestigious, and also merited. Yes, this means a delay between the work and its publication in these journals, but if importance really is their criterion, this delay is necessary (has any scientist gotten the Nobel Prize six months after their discovery?). Furthermore, it’ll stop the cycle of self-fulfilling research importance whereby Nature or Science deems a topic important, publishes on it, and thus makes it more important. This cycle is especially potent given that the mainstream media frequently look to Science and Nature for current trends and findings, so their power extends beyond just the research community. In a system where their publications were proven to be important, this promotion to the press would be warranted.
The goals of the big-name journals are admirable: trying to acknowledge important work and spread it in a readable fashion to a broader audience. But their ways of going about it are antiquated. There was a time when getting a couple of the current thinkers in a field together to decide the merit of a piece of work was the only reasonable technique, but those days are gone. The data-collecting power of web-based tools is immense and their application is incredibly apropos here. We have the technology; we can rebuild the review system. NPG’s involvement with Frontiers offers hope (to the optimistic) that these ideas will come to fruition. We already know that Frontiers supports them. We just need the community at large to follow suit in order to give validity to these measures.
One small point: most scientists in the physical sciences who post to the arXiv eventually submit to journals after somewhere between two weeks and two months, basically to allow reference requests to come in, to get feedback (i.e., a real refereeing process where dozens to hundreds of interested parties read the article rather than one or two), and to have a waiting period for one last round of edits before journal submission.
Thanks, Jay! It’s interesting to see how different publishing habits have cropped up in different fields.