The Open Access (OA) movement has been around since the 1990s – not surprising, as one of its principal tenets is that information should be freely available online. More specifically, it refers to scientific information, and in particular the kind of information found in scientific journals. As we all know, this information is generally not freely accessible: rather, it is usually restricted to journal subscribers, whether they be individuals or institutions.
The debate over whether scientific research should be freely accessible or not is a heated one, with very little sign of a resolution either way anytime soon. Its proponents say that freely available scientific research advances the cause and progression of science. Its detractors say that without journals (most of which are subscription-based), there would be no peer-review process, and hence no quality control. It’s not that simple, however.
Click here to listen to the related podcast – Embargoes in science reporting: Friend or foe?
Perhaps a good place to start is with the inevitable. Michael Nielsen has written a very clear article on the matter, entitled ‘Is scientific publishing about to be disrupted?’. In it, he argues very convincingly that scientific publishing (including journals) is about to experience the same upheaval that the newspaper/print industries have been experiencing, at the hands of the same phenomenon: the internet. And, just like the newspapers, there is relatively little that can be done about it.
One of the most important, and perhaps most noticeable, agents of this change is scientific blogging: blogs written by scientists about their own and others’ work.
As Nielsen writes:
“Let’s look up close at one element of this flourishing ecosystem: the gradual rise of science blogs as a serious medium for research. It’s easy to miss the impact of blogs on research, because most science blogs focus on outreach. But more and more blogs contain high quality research content.”
They differ greatly from published articles in that they allow scientists to carry on an ongoing conversation about their work as it develops, and they are also a valuable means of drawing other scientists into that conversation.
The movement is catching on to such a degree that numerous highly respected scientists are blogging, including Terry Tao, Tim Gowers, and Richard Lipton (list supplied by Michael Nielsen). On home ground, the New Zealand science blogging movement is also picking up pace: there are a number of blogs already in existence, and there are plans afoot to aggregate these bloggers’ work in a project called Sciblogs (based on ScienceBlogs).
“Scientific publishers should be terrified that some of the world’s best scientists, people at or near their research peak, people whose time is at a premium, are spending hundreds of hours each year creating original research content for their blogs, content that in many cases would be difficult or impossible to publish in a conventional journal. What we’re seeing here is a spectacular expansion in the range of the blog medium. By comparison, the journals are standing still.” (Nielsen)
A main aim of the Open Access movement, however, is not necessarily to dissuade scientists from publishing in journals (more on that later), or to encourage them to write blogs. Instead, it encourages them to deposit copies of their published papers (pre- or post-prints) in repositories which do give free access. Of these, ArXiv is particularly prominent, and has a fantastic physics blog.
A recent issue of the Australian (OA) journal SCRIPTed looks at the issue in a paper entitled ‘Open Access to Journal Content as a Case Study in Unlocking IP’. The paper examines the accessibility of reviewed, published papers from examples of the different types of science publishers, including PNAS, Elsevier and a major division of the US NRC.
Interestingly, the paper finds that the lack of access to published papers is not, as one might assume, solely the fault of publishers. Instead, it finds that publishers’ copyright restrictions are (relatively) liberal, in many cases allowing researchers to place their work in repositories of one form or another. The primary reason for the lack of forward momentum lies with the researchers themselves. In the paper’s conclusion:
“The exploitation of the opportunity has lagged, because of impediments to adoption, especially the lack of any positive incentive to self-deposit, and downright apathy. The outcomes to date are disappointing for proponents of OA and Unlocking IP…OA and Unlocking IP in the area of journal articles are at serious risk of being stillborn.”
No doubt, this last sentence is one which would thrill many journal publishers. However, the OA movement and blogging are not the only movements which threaten journals. These previous examples have opposed journals in a relatively passive way – they are (generally) quite happy to co-exist.
There is a far stronger movement lining up against journals. This movement, described in Times Higher Education’s recent article ‘A threat to scientific communication’, reflects growing unhappiness with published papers as the measure of a scientist’s success. An increasing number of (well respected) scientists, including the former editor of the British Medical Journal, say the influence of being published in the ‘major’ journals is far too powerful, and that journal metrics such as the Journal Impact Factor (JIF) are actually an impediment to scientific progress.
“”(Journal metrics) are the disease of our times,” says Sir John Sulston, chairman of the Institute for Science, Ethics and Innovation at the University of Manchester, and Nobel prizewinner in the physiology or medicine category in 2002.
“Sulston argues that the use of journal metrics is not only a flimsy guarantee of the best work (his prize-winning discovery was never published in a top journal), but he also believes that the system puts pressure on scientists to act in ways that adversely affect science – from claiming work is more novel than it actually is to over-hyping, over-interpreting and prematurely publishing it, splitting publications to get more credits and, in extreme situations, even committing fraud.”
A further comment:
“Noting that the medical journal articles that get the most citations are studies of randomised trials from rich countries, [Richard Horton, editor of The Lancet] speculates that if The Lancet published more work from Africa, its impact factor would go down.
“”The incentive for me is to cut off completely parts of the world that have the biggest health challenges … citations create a racist culture in journals’ decision-making and embody a system that is only about us (in the developed world).””
(Another problem cited is that the JIF, because it counts citations over only a few years, gives no indication of the long-term importance of scientific work.)
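To see why, it helps to recall how the figure is calculated. The standard impact factor uses only a two-year citation window, so a journal’s 2009 figure, for example, is roughly

\[ \mathrm{JIF}_{2009} = \frac{\text{citations received in 2009 by items the journal published in 2007–2008}}{\text{number of citable items the journal published in 2007–2008}} \]

Any citations a paper attracts after that window closes never show up in the metric, however influential the work eventually proves to be.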
Embargoes are also coming under attack (see the recording at the bottom of this page), as they make science seem more like an event than a linear series of incremental advances. This reminds me quite a lot of Professor Sir Peter Gluckman’s recent comments on the NZ media: his criticism closely matches this one, in that he feels the New Zealand media fails to show science as a gradual process, instead presenting it as a series of leaps forward. Which gave me cause to think: is it, then, actually the media’s fault? Particularly here in New Zealand, where many journalists are not able to specialise in science issues, and thus gain an understanding of the continuity of scientific research?
But I digress. The journals’ primary defense of their existence, the peer review process itself, is also coming under increased questioning. Journal publishers maintain that peer review is the only real means of quality assurance for scientific research. The reactions to this include the following:
- That peer review itself is generally undertaken for free, meaning that journals are taking free work and, essentially, selling it back to scientists.
- The peer review process itself needs to have some questions asked of it: who actually does the reviewing? How appropriate are they? How rigorous is the process? And, of course, timing is also an issue (the process can take months, greatly slowing the speed at which new research becomes known).
- In fact, this latter point brings to mind the recent debate over a paper published by well-known climate change skeptics, which attributes over 70% of climate change to the El Niño/Southern Oscillation weather pattern. While the paper was peer-reviewed, there have since been rebuttals (including this as-yet-unpublished paper) arguing that the maths used was incorrect, and calling into question the quality of the peer review undertaken on the original paper (I’m not commenting on either, please note).
Deep thought also has to be given to the tremendous amount of research lost because it doesn’t come up with a result. There are two types of experiments which produce no end result (and I speak from personal experience here): either they were poorly set up, performed or analysed, or there simply are no results to be had.
While the first group should absolutely be ignored, the second can be very important to scientists. We used to say (in the market research consultancy at which I worked for a time) that if our analysis turned up nothing, “it’s a learning in itself”. And it often can be: either it prevents other scientists duplicating the same research (a huge waste of time and resources), or it shows there really is nothing there to see, which suggests that effort be focused in another direction.
The remedy for science publishing’s woes is unclear. While everyone agrees that there is a problem, or at the very least a challenge, nobody is sure what shape the future of science publishing will take.
Michael Nielsen says that scientific publishers need to become technology-driven if they are to survive (he mentions Nature as one of the few publishers trying this), and that they must do so even if it means fundamentally changing the way they currently work.
“In ten to twenty years, scientific publishers will be technology companies. By this, I don’t just mean that they’ll be heavy users of technology, or employ a large IT staff. I mean they’ll be technology-driven companies in a similar way to, say, Google or Apple. That is, their foundation will be technological innovation, and most key decision-makers will be people with deep technological expertise. Those publishers that don’t become technology driven will die off.”
And while it seems that the peer review process is likely to stay, it will no doubt change in form. It might well come to imitate PLoS’s policy, which is to check that the results are substantiated by the methods and data, but not to worry about whether the work is original or even important – that should be up to the world at large to decide.
Of course, something else to consider is this: if a paper is published in a repository or on a scientist’s own website/blog, and is then commented on by their peers… is this not exactly what the peer review process is anyway? In that case, why be concerned with journal publication at all?
However one looks at it, the industry is in for a massive upheaval: while it is uncertain exactly what form that upheaval will take, those that innovate to stay ahead of it stand a chance of surviving, while those that stand still will, like their newspaper counterparts, face extinction.
Note: The Royal Society of New Zealand conducted some research into journal use/publication in 2004. The results are here.
This is a great post; thank you for providing such a well-balanced view of the issue.
A list of OA mandates for different countries can be found here: http://www.eprints.org/openaccess/policysignup/. The list is interesting in that NZ has a single open access mandate, associated with theses at the University of Canterbury, and no mandates from funding bodies. (I was directed to this by Peter Suber.)
As for the role of the impact factor in career development, a recent statement was made by the International Respiratory Journal Editors (behind a paywall: http://ajrcmb.atsjournals.org/cgi/content/full/41/2/127) that “the impact factor calculated for individual journals should not be used as a basis for evaluating the significance of an individual scientist’s past performance or scientific potential”. (link via Peter Binfield / Bora Zivkovic)