Cool poster!
If the effect is robust, could you calculate the financial costs of increasing the statistical power in ESM studies by 1%? (And then perhaps create a little interactive ✨ Shiny app ✨ so researchers could calculate it themselves?)
Is there any documented procedure for implementing this in a multilevel CFA?
If anyone has any ideas how the items could be improved, please let us know.
I'm not planning on improving them, but someone else might 🙃
Sorry...except for the momentary quality of online solitude. That scale doesn't work at all :)
Our interpretation is that the scales aren't completely useless, but they need revision before anyone uses them.
See the preprint yourself: osf.io/preprints/ps...
When we assessed the scales' measurement invariance across several groups, we found that none of the scales function equally among the tested subpopulations.
We used multilevel confirmatory factor analyses first on the whole sample (N = 1,913 adolescents), which yielded positive-looking results. However...
We assessed the structural validity of four ESM scales measuring: the quality of current social company, the quality of current online company, the quality of in-person solitude, and (!) the quality of online solitude.
Are you an ESM item measuring the quality of social experiences looking for validation? 👀
Because if you are, I've got some bad news for you. Our new preprint is out: osf.io/preprints/ps...
@gudruneisele.bsky.social @ginettelafit.bsky.social @lisapeeters.bsky.social @oliviajkirtley.bsky.social
For anyone interested in reading it, here is the link:
benjaminkunc.substack.com/p/farewell-d...
I will be glad for any thoughts you might have. Enjoy!
Later, I found myself coming back to some of the thoughts I've had about the current state of psychology and its metascience. Since I wanted the post to be a conclusion of my psychological journey, I felt I needed to write it all down.
When I attempted to write down the reasoning behind it, I realized it was too long for a regular LinkedIn post (or even a bluesky thread!). As a result, I ended up with a full Substack blog post consisting of two main parts. The first part is about (you guessed it) why I dropped the PhD.
The decision to drop my PhD might be surprising to some of you who weren't lucky enough to run away before I started rambling about methodology and psychological metascience.
First of all, I want to thank Olivia J Kirtley, Gudrun Eisele, and Ginette Lafit for their invaluable supervision and advice throughout my PhD, and the whole Centre for Contextual Psychiatry for the opportunity to work with such amazing colleagues.
Farewell, dear psych
This year, I have made the difficult decision to end my PhD project, move from Belgium back to Czechia, and take an indeterminate break from psychology.
"Altogether, these findings point to the strength of most contemporary psychological research and suggest academic incentives have begun to promote such research. However, there remain key questions about the extent to which robustness is truly valued compared with other research aspects."
It's nice to find this post a few moments after discussing paper mills, academic incentives, and peer review.
Wow. The correlation of replication success with IF seems to be positive, while it's negative for citations. That's the opposite of what I expected.
Join our workshop "Reproducibility Made Easy: Open Resources for All Disciplines"! 🧪🔬
📅 Date: 6th May 📍 KU Leuven Open Science Day
Dive into cross-disciplinary #openscience resources and boost your research #reproducibility!
Sign up: tinyurl.com/tfs5n7kb
Learn more: tinyurl.com/28pcr324
IMHO, many researchers (implicitly) assume that positive results imply a successful measurement process, leading to the intuition that a thorough validation is unnecessary in such cases.
This would be reasonable if we could trust our findings, which doesn't seem to be the case. 4/4
There's also a slightly edgy take on finding positive results and the criterion validity of the scales used. 3/4
One of the points is that if one is about to commit factor analysis, it's best to first check the validity evidence based on content and response processes. Otherwise, one could end up with compelling, yet meaningless, statistical results. 2/4
👀Blogpost on measurement👀
I would have rather concluded that we don't really need factor analysis and can just rely on vibes (or previous literature). But here we are: "Factor analysis: Overrated, Misused, But Still Useful." 1/4
The first preprint from my PhD is out: osf.io/preprints/ps...! 🥳
We explored the temporal dynamics of four careless responding indicators (response time, within-beep standard deviation, an inconsistency index, occasion-person correlation) in ESM data across different samples.
Thread below🧵
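For anyone curious what computing such indicators can look like in practice, here is a minimal Python sketch on simulated data. It is purely illustrative (the column names, scale range, and simulated values are my assumptions, not the preprint's data or code) and covers two of the four indicators: the within-beep standard deviation and the occasion-person correlation.

```python
import numpy as np
import pandas as pd

# Simulated ESM data (hypothetical): one row per beep, five 1-7 Likert items.
rng = np.random.default_rng(0)
n_persons, n_beeps, n_items = 10, 20, 5
df = pd.DataFrame({"person": np.repeat(np.arange(n_persons), n_beeps)})
item_cols = [f"item{i + 1}" for i in range(n_items)]
for i, col in enumerate(item_cols):
    df[col] = rng.integers(1, 8, size=len(df))

# Within-beep standard deviation: the SD across items at a single beep.
# Values near zero can flag straightlining (identical answers to all items).
df["within_beep_sd"] = df[item_cols].std(axis=1)

# Occasion-person correlation: how strongly one beep's item responses
# correlate with that person's own average response pattern across beeps.
person_means = df.groupby("person")[item_cols].transform("mean")

def row_corr(a, b):
    """Pearson correlation of two response vectors; NaN if either is constant."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    if a.std() == 0 or b.std() == 0:
        return np.nan
    return np.corrcoef(a, b)[0, 1]

df["occ_person_r"] = [
    row_corr(df.loc[i, item_cols], person_means.loc[i]) for i in df.index
]
```

Response time would simply be logged by the ESM app per beep, and an inconsistency index is typically built from item pairs expected to (dis)agree; both are omitted here for brevity.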
People have already blamed science reform for what is happening.
For 15 years I have said: If we do not get our shit together (less publication bias, higher quality, more coordination) someone else is going to implement change top down, and we are not going to like how they do it.
And here we are.
We're thrilled to introduce the Reproducibility Leuven Journal Club to Bluesky!
Our mission is to foster discussions on #reproducibility in research.
Stay tuned for more info about our upcoming journal clubs.
Let's build a vibrant #OpenScience community together!
Super cool project! Workshops on applied LaTeX, multilevel SEM, using web APIs, and more - a perfect toolkit for quantitative social science researchers
mic drop
Commonly understood theories can be properly discussed and allow for rigorous measurement, which I think are necessary for building cumulative psych science.
If you're interested in this, I definitely recommend both papers!
link.springer.com/article/10.1...
psycnet.apa.org/doiLanding?d... 5/5
However, economics shows that formalization can make our assumptions about complex systems explicit, and constrain our reasoning.
This can be particularly useful for establishing a common understanding and reducing the omnipresent psychological vagueness. 4/5