“However, I am naive enough to hope that if academics, supported by their universities, have an informed debate about IP, OA and academic freedom, that there will be enough agreement to lead us towards something better than the situation we find ourselves in now.”
For the 3-4 regular readers of this blog, you’re probably aware that a while back we published a paper with F1000Research reviewing the evidence behind the societal, economic, and academic impacts of Open Access.
Today, we submitted what I like to think of as the ‘final’ version of that paper. We have taken on an enormous wealth of feedback from the community through formal peer review, comments, open discussion on social media, and personal conversations, and integrated all of this into the manuscript. This discourse has greatly improved the content, and I hope you all find it to be a useful basis for further discussions of Open Access.
I consider this to be the final version, as thanks to this extensive ‘peer review’ I feel there is little more that can be significantly altered. Of course there will always be future developments and debates in Open Access, but rather than adapt the paper fluidly alongside them, I’d rather consider it a stable reference point on which to base these discussions.
That does not mean that everything in the paper is perfect. I strongly encourage further discussion and debate on the article itself, continuing the rich comment thread that exists already. If something major comes up that we have failed to include, then we will open up considerations for a new version.
Finally, please do share this paper with your friends and colleagues. It’s such a damn important topic, and well-worth being informed about. Remember, Open Access isn’t about policies, mandates and embargoes – it’s about freedom, equality, and democratic access to our core global knowledge base. That’s something worth fighting for.
Metrics, metrics, everywhere. Not a day goes by in academia without some new metric being designed for research assessment, or a complaint about how crap another metric is. There are soooo many studies out there that look at things like how open access influences citation rates or altmetrics, or what the relationship between altmetrics and citations is.
But most of these are large-scale studies, so they don’t really mean much to the individual researcher. So I decided to have a look at the impact factors, citations, and Altmetric scores for each of my papers (not that many...) just to see what sort of relationships existed between them on a personal level.
Where’s the data from? Well, that part’s easy. The Internet.
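The sort of comparison I mean can be sketched in a few lines of Python. To be clear, the numbers below are made-up placeholders rather than the real figures for my papers, and the textbook Spearman formula used here assumes no tied values:

```python
# Illustrative sketch: rank correlation between per-paper metrics.
# The numbers here are made-up placeholders, not real data.

citations = [15, 3, 42, 7, 20]    # e.g. citation counts per paper
altmetric = [40, 5, 12, 120, 55]  # e.g. Altmetric attention scores

def ranks(values):
    """Rank values from smallest (1) to largest; assumes no ties."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    r = [0] * len(values)
    for position, i in enumerate(order, start=1):
        r[i] = position
    return r

def spearman(xs, ys):
    """Spearman rank correlation: 1 - 6*sum(d^2) / (n*(n^2 - 1))."""
    rx, ry = ranks(xs), ranks(ys)
    n = len(xs)
    d2 = sum((a - b) ** 2 for a, b in zip(rx, ry))
    return 1 - 6 * d2 / (n * (n ** 2 - 1))

print(f"citations vs altmetric: rho = {spearman(citations, altmetric):.2f}")
```

With only a handful of papers, any correlation you compute this way is anecdote rather than evidence, which is rather the point.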
Embargo periods on scientific research are now fairly commonplace. They are sanctions imposed by publishers on different versions of a research manuscript, often termed the author-accepted manuscript (AAM) or post-print, in order to delay public release of those versions. Typically at this stage, the publishers themselves have had little or no input beyond managing peer review through volunteer editorial staff.
These impositions now typically exist in the form of embargo policies, in which publishers ‘allow’ researchers to deposit these earlier (still peer-reviewed) versions in a public repository of some sort, but only after a delay of anywhere between 6 and 24 months. This is commonly referred to as ‘green open access’, although the original definition simply required public archiving in a repository, without any mention of embargo periods.
This was originally posted here.
This interview presents the perspectives of an early-career researcher who conducts research, publishes papers, attends academic conferences as part of his PhD, travels to different parts of the world to help educate researchers about open research and science policy, blogs actively, serves as a peer reviewer, and makes time for several other activities including this interview! Jonathan (Jon) Tennant dived head first into palaeontology research, his first love, even when it required him to change disciplines. And during this journey, he discovered his passion for all things related to scientific communication and policy, especially open science. He is among those researchers who realize the true potential of networking and utilize it to actively participate in dialogue on some of the most critical issues in academic research — all this alongside managing a demanding research schedule. I spoke to Jon about his interests both within and outside research. I particularly wanted to understand how he is able to pursue serious research as well as be involved in other activities, and learned that the primary driving force behind Jon’s work is his passion for science and the need to ensure that more and more people are informed about the most important developments in academic publishing.
Jon is currently a final year PhD (palaeontology) student at Imperial College London in the Department of Earth Science and Engineering. His research focuses on patterns of biodiversity and extinction in deep time and the biological and environmental drivers of these patterns. Jon is also passionate about science communication and strongly believes that science should be in the public domain. He takes a deep interest in following and talking about how current trends in open science impact science communication. He also maintains a blog, Green Tea and Velociraptors, and tweets actively about topics close to his heart.
For this concluding segment of the interview series, I asked Jon about three things:
- What needs to change in the current academic publishing scenario
- What the future of academic publishing looks like
- What advice Jon has for researchers based on his experiences as a researcher and communicator
Jon talks about the need to strengthen science policy to consider “the best interests of the commons” and for the whole academic community to embrace the concept of transparency in all aspects of research. He also feels that the current research assessment system needs major restructuring. But it’s not all bleak, according to Jon, for he has some optimistic predictions for the future of science communication. He concludes by sharing some valuable advice for early-career researchers.
If you could change three things about research communication/science policy, what would they be and why?
We often talk about “open” as if it has value in itself. This is only a half truth, and the real value comes from what openness gives you, which depends on the context. In science policy, I would love to see more transparency in the decision-making process as a result of being more open. What evidence was used? Where? How was the conclusion reached? What conversations/meetings were had that haven’t reached the public record, and what was discussed at them? This is part of transparency for the sake of democratic accountability, and an important part of social policy making to me. For many, science policy is like reading the conclusions section of a paper, but the methods, materials, and discussion sections are all completely absent!
I would like to see more policymakers and funders acting in the best interests of the commons. For example, if you have a choice to make between preserving the unsustainable profit margins of some scholarly publishers or offering better value for money elsewhere, then you take the second choice. When you have some publishers making 40% profit margins on revenue that comes at least partly from public funding, and largely through prohibiting public access to knowledge, you know something is massively wrong with your system. Changing this certainly includes the development of a robust scholarly communications infrastructure (which people like Björn Brembs and Geoffrey Bilder among others are great advocates for) — essentially crafting an efficient, default workflow for the entire research process, including communication. If governments fund this, we could save billions each year and re-inject that money into research instead of wasting it on profiteering publishers and the relatively low-value, high-cost services that they offer.
Most of all, though, we need a complete and massive-scale overhaul of our assessment system. It is completely unacceptable that in the digital age we are still defaulting to lazy and inappropriate assessment criteria for whatever reason. This is actually related to the second point, in that if you build an infrastructure where communication of research (i.e., ~95% of the value) is decoupled from both the concept of journal prestige and the traditional publication process, then we should see a movement away from poor evaluation criteria (i.e., those which are journal-based). In almost every discussion I have with junior researchers, this is what it comes back to. Some are being put off doing science openly, or even correctly, because they fear they will be firstly punished by the publishing system, and then penalized by the research evaluation system. It is outrageous and bewildering that we haven’t managed to come up with a practical, systemic solution to this.
Where do you see academic publishing 20 years from now?
I suspect for one that we will see an almost total decoupling or deconstruction of what “publishing” is. By that, I mean the process will be streamlined to such an extent that the legitimately valuable services such as copy editing or type-setting are automated or outsourced for incredibly low prices, as some journals are already showing (i.e., by embracing the power of the internet and technology). Peer review becomes an open, constructive, and transparent community-driven process, similar to how we see people use StackExchange. Paywalls are non-existent, and we lament that they even once existed. Traditional publishers still exist, but now offer overlay or data-oriented services, because we’ve created a system where the communication of research is entirely independent of publishing. Instead of publishers and journals picking which papers they should publish, publishers should be paying for the privilege of getting to publish that work. I see copyright reforming such that it is academics who retain ownership of their work, and not being used as an income-engineering tool for publishers. Evaluation is done by the community, for the community — for example, a simple system like StackExchange where the value of content is based on community-wide assessment and how that content is re-used and digested. Instead of researchers being forced to create crudely written papers, we see a decoupling of the “paper” itself – data collectors publish data; communicators define the context; statisticians analyse the data; machines perform massive-scale meta-analysis on the global knowledge corpus, and we ultimately create a platform or series of platforms that leverages what is eminently possible with modern technology.
Of course, none of this will probably happen, because academic culture is the definition of inertia. But we can and should be optimistic, and strive every day to make sure we are doing research for the good of the commons, and not to line the pockets of a few greedy corporations.
Would you like to offer any other advice to early-career researchers?
I can share some of the things I have learned as a young researcher:
- Develop skills beyond being a researcher. Push your horizons, talk with people beyond academia, and take on as much experience and perspective from others as you can. Listening is so much more valuable than speaking.
- Find something that is important to you, and dedicate your time to doing it. And if you love it, then give it your best!
- Also, no matter what you do in academia, you will always end up getting on someone’s bad side. Usually this means that you’re just challenging the status quo, so don’t be afraid of rising to challenges or meeting resistance, but always be as diplomatic as possible.
- Find existing networks of people working on things similar to your interests! Via social media, the power of communities within science has never been more visible, and there are always people out there for you to learn from, collaborate with, and help out if needed.
- Never be afraid to ask questions: this is how we learn and collectively progress. If someone makes you feel stupid for asking an “obvious” question, that is the sort of arrogance that science could do without — we should all be fully aware that we never stop learning.
- Finally, sometimes it does get a bit much alongside all the other things you have to do as a student, so the best piece of advice I can give is learn how to say “no,” and don’t bite off more than you can chew! Things will always get done eventually by someone, and grad students especially are always over-burdened and over-pressured, so you have to manage your time exceptionally well.
Thanks, Jon, for the insights! This was a fantastic conversation. Good luck with your research and writing. Let’s hope some of your predictions come true!