New TIER2 preprint! 🔔
📖 The Reproducibility Promotion Plan for Funders provides a structured set of recommendations to support funders in promoting reproducible research practices.
🔗 osf.io/preprints/me...
A new PNAS paper finds that polarization increased immediately after the release of Lady Gaga’s “Just Dance” and the advent of the late-2000s electro-pop era, both of which arrived around the same year, 2008.
My belated annual roundup of things we learned about peer review in 2024. Including structured peer review, reviewers' uncertainty, & "uselessly elongated review bias." Now online at @plos.org
absolutelymaybe.plos.org/2025/04/28/5...
Featuring @mariomalicki.bsky.social @aidybarnett.bsky.social
📅On 27th November 2024, TIER2 and Taylor & Francis Group hosted a workshop at the 19th Munin Conference on Scholarly Publishing in Tromsø, Norway, addressing research reproducibility. A report on the workshop is now available on TIER2's website.💡
🔗Learn more here: tier2-project.eu/news/tier2-a...
Sadly spent most of this very sunny day writing a narrative CV for a grant application. Jeez, that's a lot of extra work that I'm not sure anyone is likely to read too deeply.
What do others think of them?
A new TIER2 preprint, authored by Serge Horbach, Nicki Lisa Cole, Simone Kopeinik, Barbara Leitner, Tony Ross-Hellauer and Joeri Tijdink, explores the barriers and enablers for reproducibility in research.
🔗Learn more: tier2-project.eu/news/new-tie...
"Conflation of synthetic #GenAI and real data could corrupt the research record; degrade the #quality and #reproducibility of scientific data and analytical methods; and, ironically, sabotage the training of AI models." www.pnas.org/doi/10.1073/... #synthetic_data #research_integrity
Was reminded of this ultimate Venn today.
New paper by Thomas Klebel "Investigating patterns of knowledge production in research on three UN sustainable development goals", just published in Online Information Review! doi.org/10.1108/OIR-...
Since I see (again) many new people here, here is the meta-research and open science starter pack 2.
New Preprint! "Reproducibility and replicability of qualitative research: An integrative review of concepts, barriers and enablers" - osf.io/preprints/me...
A nice output from our TIER2 project, led by Nicki Lisa Cole :)
6/ 📚 Read the full study:
🔗 “Open Science at the Generative AI Turn”
Published in Quantitative Science Studies (MIT Press):
👉 doi.org/10.1162/qss_...
Let’s work together to ensure that #AI & #GenAI align with the values of #OpenScience!
5/ 🌍 A Call for Responsible Use
To ensure GenAI aligns with Open Science values:
- Researchers must integrate GenAI with care and scrutiny.
- Developers need to create transparent, unbiased tools.
- Policymakers must balance innovation and risk.
4/ 🔍 The Risk
Despite the potential, there are challenges:
❌ Opaque “black box” models undermine transparency
❌ Bias in training data risks reinforcing inequalities
❌ High computational demands raise sustainability concerns.
3/ ✨ The Opportunity
GenAI can:
✅ Increase efficiency and enhance documentation
✅ Simplify complex science into accessible language
✅ Break language barriers through translation
✅ Enable public participation in research
✅ Promote inclusivity, accessibility, and understanding.
2/ TL;DR. Mohammad Hosseini, Serge Horbach, @kristiholmes.bsky.social and I explore GenAI's enormous potential to enhance accessibility and efficiency in science. But we emphasise that to do so, GenAI must embody the Open Science principles of openness, fairness, and transparency.
Screenshot of paper "Open Science at the generative AI turn: An exploratory analysis of challenges and opportunities" by Mohammad Hosseini, Serge P. J. M. Horbach, Kristi Holmes and Tony Ross-Hellauer. Quantitative Science Studies 1–24. https://doi.org/10.1162/qss_a_00337 Abstract: Technology influences Open Science (OS) practices, because conducting science in transparent, accessible, and participatory ways requires tools and platforms for collaboration and sharing results. Due to this relationship, the characteristics of the employed technologies directly impact OS objectives. Generative Artificial Intelligence (GenAI) is increasingly used by researchers for tasks such as text refining, code generation/editing, reviewing literature, and data curation/analysis. Nevertheless, concerns about openness, transparency, and bias suggest that GenAI may benefit from greater engagement with OS. GenAI promises substantial efficiency gains but is currently fraught with limitations that could negatively impact core OS values, such as fairness, transparency, and integrity, and may harm various social actors. In this paper, we explore the possible positive and negative impacts of GenAI on OS. We use the taxonomy within the UNESCO Recommendation on Open Science to systematically explore the intersection of GenAI and OS. We conclude that using GenAI could advance key OS objectives by broadening meaningful access to knowledge, enabling efficient use of infrastructure, improving engagement of societal actors, and enhancing dialogue among knowledge systems. However, due to GenAI’s limitations, it could also compromise the integrity, equity, reproducibility, and reliability of research. Hence, sufficient checks, validation, and critical assessments are essential when incorporating GenAI into research workflows.
1/ 🚨 NEW PAPER! “Open Science at the Generative AI Turn”
In a new study just published in Quantitative Science Studies, we explore how GenAI both enables and challenges Open Science, and why GenAI will benefit from adopting Open Science values. 🧵
doi.org/10.1162/qss_...
#OpenScience #AI #GenAI
It's presumptuous, but I don't mind that so much — but what I really dislike is when I need to use those details and validate the new account just in order to decline.
8/ Read the full paper here for insights on how to reshape research evaluation systems for fairness and effectiveness: doi.org/10.1093/rese...
7/ We close with recommendations: clarify core purposes of research assessment, use shared frameworks, train assessors on bias, reduce over-frequent assessments, and move beyond binary thinking on qualitative/quantitative methods.
6/ We examine the “performativity of assessment criteria,” revealing a tension between rigid/flexible criteria and how transparently they are communicated. Transparent, equitable frameworks are vital to align formal criteria with the realities of research evaluation.
5/ Respondents noted that beyond metrics, informal factors—social dynamics, politics, and demographics—play key roles in assessment outcomes. These hidden criteria emerge in opaque processes, granting assessors significant flexibility.
4/ Through qualitative analysis of free-text responses from 121 international researchers, we highlight a major gap between formal evaluation criteria and their practical application.
3/ How do current systems enable “hidden factors” like cronyism or evaluator biases, and how might these change under proposed reforms? Our study examines researchers' perceptions of social and political influences on assessment processes.
2/ Reform of research assessment, especially to avoid over-quantification and empower qualitative assessment, is a hot topic. Change is coming. But how do we balance broader criteria to value activities beyond publishing/funding, peer review reliance, and merit-based rewards?
1/ The full paper is available at: doi.org/10.1093/rese...
New Paper! “Understanding the social and political dimensions of research(er) assessment: evaluative flexibility and hidden criteria in promotion processes at research institutes”, just published in Research Evaluation by me, @naubertbonn.bsky.social & Serge Horbach. Thread below!
Beware of the #streetlight effect in measuring #openscience #impact @tonyrh.bsky.social #munin2024 conference
Brilliant overview of #openscience by @tonyrh.bsky.social #munin2024 conference: particularly liked the “unintended consequences” slide, (me, not Tony) thinking about the $12,290 APC asked by #SpringerNature for one single article to be #openaccess - not equitable at all.
Fantastic thread, thank you Bart!