This is so cool because even a complete R coding idiot like myself can follow along. Thank you so much!!
@marcuscrede
Grumpy IO psychologist with an interest in research methods, meta-science and personality. Views are my own and not those of my employer. Bury my heart at Klein Tambotieboom and scatter my ashes over the Bosveld horizon.
Yup! I have an early draft of their original paper and it is nuts to read that. Carney sent it to me when I contacted her to inquire about the non-significant chi-square value that they reported as significant. That's when the whole power posing crisis started. Somehow I was framed as the bad guy.
Important to remember that neither the original power posing researchers nor almost all of the replicators could be bothered to include a neutral-pose control condition, and that any effect on feelings was almost entirely due to the negative effects of a contractive pose.
Ha! The original Lancet article on the dangers of reading in bed is here: doi.org/10.1016/S014...
If you contact an APA journal to note that multiple papers by the same authors claim that 2+2=7 and that all of their inferences are based on this claim what will follow is a year-long "investigation" followed by a correction noting that 2+2=4 but that the inferences are unchanged.
Science!
I just received an e-mail suggesting that I follow Snoop Dogg on LinkedIn. Does this mean that I am cool?
True but I also would not have wanted to witness Trump and the Ayatollah resolving conflict the way that Bonobos do.
Definitely. I am very confident that other journals would publish this. I was naive enough to think that the journal that published the meta-analysis would see that they should publish what is essentially a major correction to work that they published.
Good point! We've appealed already but asking for a retraction might not be a good follow-up.
Today the APA journal that published that meta-analysis rejected our commentary - primarily because our findings were not interesting enough. What the actual fuck!
We recently submitted a commentary on a very influential meta-analysis. We found that: 1) 40% of the relevant literature had not been identified because of a lazy search, 2) a few large-N included studies did not meet the stated inclusion criteria, and 3) almost all significant moderator findings were wrong.
Users of survey data, lovers of DAGs, and general methodological enthusiasts, gather round!
I'm so excited to share this new paper, joint work with my brilliant colleagues @rjsilverwood.bsky.social, @pwgtennant.bsky.social, and Liam Wright.
🧵
Is there no informed consent form for this? This feels very dodgy. Is there IRB approval for this study?
Pfft. Mathematically impossible results are not a problem at all. Simply send them to the Journal of Management. They give awards for that kind of stuff.
I paid around $2.50 a few days ago.
Plan to take the next day off from work if you are going to get the pneumonia vaccine. Ask me how I know.
Amazing!!
Thank you! Will do my best to integrate this into my yearly research methods course.
The lack of uptake of sequential analysis shows how irrational scientists are, and how their methods are driven by norms. Sequential analyses give you more flexibility and are more efficient than a single hypothesis test, and yet, they are still very rare. lakens.github.io/statistical_...
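The efficiency claim above can be illustrated with a small simulation. This is a minimal sketch, not any particular published design: it assumes four equally spaced interim looks and uses a two-sided Pocock critical value of roughly 2.361 (the approximate bound for four looks at an overall alpha of .05). Under a true effect, trials tend to stop well before the maximum sample size, while the false-positive rate under the null stays near the nominal level.

```python
import numpy as np

def sequential_trial(rng, delta, looks=(25, 50, 75, 100), z_crit=2.361):
    """One simulated two-group trial with interim looks.

    z_crit ~= 2.361 is the approximate two-sided Pocock critical value
    for 4 equally spaced looks at overall alpha = .05 (an illustrative
    assumption, not an exact tabled value).
    Returns (rejected, n_per_group_at_stop).
    """
    n_max = looks[-1]
    a = rng.normal(delta, 1.0, n_max)  # treatment group scores
    b = rng.normal(0.0, 1.0, n_max)   # control group scores
    for n in looks:
        se = np.sqrt(2.0 / n)         # SE of a mean difference, sd = 1
        z = (a[:n].mean() - b[:n].mean()) / se
        if abs(z) > z_crit:
            return True, n            # stop early: boundary crossed
    return False, n_max               # ran to the final look

rng = np.random.default_rng(1)
sims = 2000

# Under a true effect (d = 0.5): average n per group at stopping.
stops = [sequential_trial(rng, delta=0.5)[1] for _ in range(sims)]
print("mean n/group under a true effect:", np.mean(stops))

# Under the null: rejection rate should stay near the nominal 5%.
rejects = [sequential_trial(rng, delta=0.0)[0] for _ in range(sims)]
print("type I error under the null:", np.mean(rejects))
```

The point of the sketch is the trade-off Lakens describes: by paying a slightly stricter per-look threshold, you buy the right to stop early, which lowers the expected sample size whenever a real effect exists.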
This is the most incredible footage of blue whales I’ve ever seen
The best way to make a small fortune is apparently to start with a big fortune and buy an old house. Unfortunately, I did not start with a big fortune.
This is completely bonkers.
Like a public autopsy - just more gruesome.
Been keeping track - 8 more review invitations accepted since this post (11 days ago).
A case of nominative determinism in research focus among academics?
www.tandfonline.com/doi/full/10....
But can you do that via a correction or does it require a retraction?
What should journals do for papers that: 1) examine an interesting question, 2) but get the analysis wrong so that the conclusion changes from "evidence of effect" to "no evidence of effect" when error is corrected, and 3) the title of the paper states "evidence of effect".
Can a title be corrected?
The article states that the data is "openly available" but it turns out that this is not the case. Why?
Babbage, 1830, discussing the problem that scientists selectively report findings that they want to be true.
Confirmation bias is a strong human tendency. This is why we need to design science in a way that prevents confirmation bias from leading us away from the truth.