As someone who had to submit an ethics review application this week to analyze publicly available existing data, I can co-sign that Canadian regulations can make what feels straightforward more complicated than one might expect.
New: "Effect of Artificial Intelligence on Learning: A Meta-Meta-Analysis" by Wagenmakers and colleagues, revealing evidence of "severe publication bias and extreme between-study heterogeneity" in existing meta-analyses of the effects of AI on learning: osf.io/preprints/ps...
New blog post, inspired by the excellent recent qualitative paper by Makel and colleagues: On the reliability and reproducibility of qualitative research.
I reflect on how I will incorporate realist ontologies in my own qualitative research.
daniellakens.blogspot.com/2026/02/on-r...
New #RSOS paper: "Don't hate the players, hate the game": qualitative insights from education researchers on questionable and open research practices. Read more: doi.org/10.1098/rsos... @mattmakel.bsky.social @sarahcaroleo.bsky.social @jesse-fleming.bsky.social @bryancook.bsky.social
The Wikipedia page for the Mozart Effect has a whole section on its popularization, but don't overlook the section just after it on political impact.
Mozart effect?
54 weeks later, this preprint is now published: royalsocietypublishing.org/rsos/article...
Need some tips for searching PsyArXiv? Wrote a blog post blog.psyarxiv.com/2026/01/27/h...
You may be eligible for $700 each! But that takes a phone call (I have had less productive phone calls)
Do you know the reason for the original delay? In Canada passengers can be eligible for compensation if the delay is not weather related: Compensation for flight delays and cancellations | Air Passenger Protection share.google/TOYuIi0pTs5N...
Making a List Requires Checking it Twice: A Call for Empirical Evidence in Characteristics Lists journals.sagepub.com/doi/10.1177/...
Something I learned: some German colleagues did not know the title was a reference to a song!
Popular rendition: www.youtube.com/watch?v=uXK4...
My fav version: www.youtube.com/watch?v=76WF...
It is a dereliction of duty and a violation of the public trust if the research community misrepresents a thinly supported set of assertions as though they are well grounded in empirical support. It is the responsibility of the research community to rigorously evaluate what is known and what still needs further investigation. If there is a rich and well-sourced body of empirical research supporting characteristics lists, the research community needs to highlight this strength while making the benefits of such lists more transparent to parents and practitioners. If there is not a well-sourced body of support, the research community needs to act before it advises. Or it needs to advise with clearer caveats. Absent data, the research community risks its reputation and the value that it provides to society.
Whatβs needed?
Before we start using characteristics lists to make decisions or inform others, any list must:
A. Gather empirical evidence, including prevalence rates from both gifted and typical students.
B. Connect lists with specific definitions, domains, and identification practices.
For example, βhaving two eyesβ is surely quite common in gifted students. But because the prevalence rate is likely indistinguishable from non-gifted students, it is not a particularly useful descriptor of gifted students.
Limitation 6: Lack of Awareness of Prevalence Rates.
Lists typically do not cite primary research, report prevalence rates, or compare gifted and non-gifted students. To call a feature a "characteristic", we must know the prevalence of that feature in both gifted and non-gifted students.
Limitation 5: Self-Fulfilling Prophecy and Bias.
Any list created based on observation will reflect all biases that were part of the initial identification process.
Limitation 4: Lack of conceptual clarity.
Many items are vague and difficult to differentiate from other items, inviting jingle/jangle fallacies: Are "high level of language development" and "high level of verbal ability" different characteristics or different examples of the same latent characteristic?
Limitation 3: Lack of Alignment With Identification and Selection Practices. Different identification criteria identify different students, e.g., the top 1% of students is not the same as the top 10%. Because different practices identify different students, we can't assume a list developed using one set of criteria generalizes to others.
Limitation 2: Lack of alignment with Definitions and Domains.
Different definitions of giftedness identify different students. Just like basketball coaches ID different students than theater directors. Any list using one definition/domain does not automatically generalize to others
Limitation 1: Lack of Empirical Evidence.
Many lists give no background on their development or offer only anecdotes. Others include citations but don't cite primary research. And many that do cite primary research cite things like case studies or research that did not compare gifted with typical students.
My argument: To be useful, characteristics lists must be based on more than good intentions. They need empirical support. Without it, lists will not help schools and can exacerbate inequity and distrust in research. Calling something a characteristic is a privilege that must be empirically earned.
Characteristics lists are everywhere. Government agency websites, school district sites, advocacy groups, popular press books, textbooks, and across the internet. But what about the evidence supporting these lists?
New Publication alert: Making a List Requires Checking it Twice: A Call for Empirical Evidence in Characteristics Lists [https://journals.sagepub.com/doi/10.1177/00169862251392934] #OpenAccess
Job opportunity: Junior Professorship in Psychological Metascience @zpid.bsky.social leibniz-psychology.onlyfy.jobs/job/10kku5n7 h/t @bethclarke.bsky.social
Individual: @simine.com, psychologist at @unimelb.bsky.social & editor-in-chief of Psychological Science, is recognized for pioneering methodological rigor, reproducibility & collaborative research, driving initiatives such as @improvingpsych.org & the journal Collabra @ucpress.bsky.social. (2/5)
Job posting alert! Open Science Specialist at the University of Calgary (in Canada!) #job #OpenScience careers.ucalgary.ca/jobs/1704502...
Whereas I grew up in Michigan and never heard of it until I got a job at a southern restaurant that called it egg in a basket.
Mark your calendars for SIPS 2026!
The SIPS 2026 in-person conference will take place in Washington, DC, June 8-10, 2026.
The conference will be held at George Mason University's Arlington, Virginia campus (thanks to @natonge.bsky.social, our 2026 local host!)
Job opportunity: postdoc for the National Center for Research on Advanced Education: apply.interfolio.com/175953 PhD in education or a related field? Have strong quantitative and mixed methods methodological skills? Experience with advanced education, gifted education, or talent development? Apply!