This is awesome! No more (change code -> save plot -> open file)^n just to make sure your plot is readable
For journalists who want to dig into stories about scientific integrity: Here's an exciting new opportunity from two of my favorite science-journalism organizations, @retractionwatch.com and @theopennotebook.bsky.social. I'll be joining a webinar to help kick things off on March 26!
Oh my, oh my. I didn't mention that use case because I thought I'm the only one. Excel is great for that!
I am full of surprises. I don't really use it for research. I use it for (1) tracking things that I don't want to log in an app because of privacy reasons (e.g., weight, mortgage, watering plants), and (2) quick checks of potential data fabrication in Qualtrics questionnaires.
I received this as a gift. I love it!
We should definitely read more outside our own fields, and make friends with people who know stuff that we don't
Thank you for the added info. "Intent to treat" was the term I knew about. Selection into treatment and different types of non-compliance are things I learned about in microeconometrics classes, but that I rarely see mentioned in experimental methods work. It's unfortunate.
I'd never heard of "per protocol" analysis before. I guess this is not a term used in the causal inference literature, right?
Weird, but I didn't get any party invitations by email. I would like to issue a correction and ask for party invitations to be sent via a GitHub Pull Request.
bsky.app/profile/anam...
Online Studies
Psychological Science requires that authors who use samples from online data collection include a statement in the Method section explicitly addressing their approach to preventing and detecting automated or AI-generated responses.

Rationale
As large language models and other generative AI tools become more accessible, the risk of data contamination by non-human respondents has increased dramatically in research. Psychological science (and the social sciences generally) is particularly susceptible to this issue given its growing reliance on online data collection. Preventing automated responses during data collection and detecting them afterward often involve methodological trade-offs. For instance, technical barriers that aim to prevent LLM use (e.g., blocking copy-pasting functionalities) may eliminate behavioral indicators needed for detection (e.g., pasting rather than typing). This policy aims to enhance the transparency and reproducibility of reported results by requiring authors to articulate their approach across both prevention and detection dimensions, enabling readers and reviewers to assess the likelihood of reported data being influenced by automated responses.

Scope
This policy applies to any submission with at least one study that includes data collected online without direct human supervision (e.g., via crowdsourcing platforms, student participants who complete the study online, online recruitment ads, or remote survey distribution tools).

Required Reporting
Authors must include in the Methods section either: A statement confirming that procedures were in place to prevent and/or detect and exclude automated or AI-generated responses, including a description of those procedures (e.g., explicit participant instructions against LLM use, disabled copy-paste functionality, CAPTCHA use, IP filtering, consistency checks, attention checks, adversarial prompting) as well as the types of automated responses that these procedures are suitable …
Maybe of interest: The submission guidelines of Psychological Science now require an explicit statement on measures taken to reduce the risk of AI-generated responses for all online studies!
www.psychologicalscience.org/publications...
Write your experiments and analysis code in such a way that a STAR editor at Psych Science highlights your paper. Hats off to @dillonplunkett.bsky.social who did a great job both on the science and diligently coding & documenting everything. Check out our methods to see our reproducibility approach.
During the checks, @dillonplunkett.bsky.social was very responsive and any required changes were quickly implemented. All in all, a very positive experience. I hope to see this many more times in the future.
I want to highlight this paper as one of the smoothest repro checks I've done as a STAR editor. There are 7 studies reported in the main paper, so plenty of data and code to work with. The authors had done a great job and the repro package was already quite good at the start of the checks (1/n)
Are you having a fun Sunday estimating an APC model in MPlus?
Maybe this is meant to bring all of us `tibble` fans together
Instant follow. There's hope!
Please stop, I am not sure I can take more news
It's ok, I am an adult, I can deal with this. Different people have different preferences. I like heterogeneity.
Wrapping up my trilogy on how to put together a replication package: The Return of the Code i4replication.org/a-researcher...
#econsky #openscience
Really cool talk
qed "I am fun". Send me your party invitations by email
Also, if the 88% is accurate, that must be based on past data. So that implies that emails are sent out in batches, and people get an updated %, right?
Is this an RCT and you're in the 88% group? I wonder what % or "nudge" other people got.