@achetverikov
Associate Professor in Cognitive Psychology at the University of Bergen, Norway. I study decision-making and biases in perception and visual working memory, with occasional forays into higher level decisions. https://andreychetverikov.org
Maybe we should look for Norwegian-Australian funding =)
Good idea! I'll ask the editor if they are willing to consider it if resubmitted this way. I think the problem is that their pipeline assumes responses from authors as well, which would be a bit odd here if the commentary will go to the original paper authors. But maybe framing can help.
I'm curious about humans failing prompt injection tests. Is it just because they are responding randomly?
The sky is not falling; high-quality platforms (Prolific, Verasight, CR Connect) have low rates of apparent bots. osf.io/preprints/ps... But also not zero; vigilance is very much needed!
As a side note, after several interactions with the PNAS editorial office to get my response submission through the really weird formal checks, I learned that PNAS doesn't accept responses to commentaries. So it's likely to stay as an eternal preprint unless someone wants to publish this?
I guess so! Here it is also averaged across lags 1 to 40, so I'm sure that if we take lag 1 only, it will be stronger, but much individual variation remains. Here are the same two studies using the ACF at lag 1 only.
Look forward to it! If you do develop a bot that can go through the experiments, please do not release it into the wild =) I do like online data collection and would hate to go back to lab-only studies.
Well, I agree about perfectly Gaussian, but we don't see it in their data. And re lack of autocorrelation, see my response letter: bsky.app/profile/ache...
Again, an absolute zero autocorrelation would not be possible, but a very low one is quite common.
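A quick sketch of the point (toy simulated data, not from the actual studies): even for fully independent responses, the lag-1 autocorrelation coefficient is almost never exactly zero, just close to it.

```r
# Sketch with assumed toy data: lag-1 autocorrelation of simulated
# response times from an i.i.d. "random responder"
set.seed(1)
rt <- rlnorm(200, meanlog = 0, sdlog = 0.3)

# acf() from base R's stats package; the lag-1 coefficient is the
# second element of $acf (the first element is lag 0, which is always 1)
r1 <- acf(rt, lag.max = 1, plot = FALSE)$acf[2]

# For independent data, r1 is small but essentially never exactly zero
r1
```

With 200 trials the sampling standard deviation of r1 is roughly 1/sqrt(200) ≈ 0.07, so values near (but not at) zero are exactly what independence predicts.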
I also argue that creating bots for online studies is not trivial and likely economically infeasible. At least I can't create a bot that would get through my own study in a reasonable time. So: their 'bots' are likely humans with pretty normal patterns, present both online and offline. Safe for now! 5/5
The same 'suspected' bots can be readily identified in previously collected offline data from Hedge et al., 2018 (Fig 1E-H), and while we can suspect that some undergrads or our lab colleagues have unusual behavioral patterns, they are not likely (yet!) to be driven by AI. 4/5
Autocorrelation function plots for 3 suspected bots
In fact, the lack of autocorrelation highlighted in the commentary (and shown here for three suspected bots) is very common in their data (Fig. 1D). 3/5
Their suspected bots likely show common observer strategies (random or very slow deliberate answers, Fig. 1A). 'Bots' identified based on one parameter don't look like bots on others. There are also no signs of mixed distributions that would hint at bots (Fig. 1B-D). 2/5
Recently, van der Stigchel and colleagues posted a provocative commentary suggesting that we should be wary of bots in online behavioral data collection (🧵 by @cstrauch.bsky.social here: bsky.app/profile/cstr...). But should we? Here is my response letter osf.io/preprints/ps.... 1/5
Can I ask, what does it do? Is it just explaining the book / manuals?
Yes, and the pesky individual differences! Can't we at least see things similarly?
@ecvp.bsky.social when is the abstract submission deadline? The website mentions the early bird registration deadline but not the one for abstract submission.
Government-funded infomercial from Norway striking back at US corporations and the tech bros for filling the internet with slop.
youtu.be/T4Upf_B9RLQ?...
Join our lab in Geneva as a postdoc working on #workingmemory, with both Jarrod Lewis-Peacock and me!
In support of #OpenScience, we routinely ask authors to openly share their #research #code before publication.
We are now formalizing this practice with a mandatory #CodeSharing policy and clarifying what we mean by code sharing.
Well, judging by the audience here, I might be in the minority, but loops are often not the best way to do things in R. As with matrix ops, sure, you can often write them as loops, but should you? Vectorized ops are faster _and_ cleaner. And high-level wrappers like lapply help to condense the code.
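To make the contrast concrete, here is a minimal toy sketch (my own illustrative example, not from the thread): the same element-wise computation written as an explicit loop, as a vectorized expression, and via a high-level wrapper.

```r
# Toy example: squaring a vector three ways
x <- 1:10

# Loop version: preallocate, then fill element by element
squares_loop <- numeric(length(x))
for (i in seq_along(x)) {
  squares_loop[i] <- x[i]^2
}

# Vectorized version: one expression, no explicit indexing
squares_vec <- x^2

# High-level wrapper: sapply condenses the loop into one call
squares_apply <- sapply(x, function(v) v^2)

# All three produce the same values
all.equal(squares_loop, squares_vec)
```

The vectorized form dispatches the arithmetic to compiled code in one call, which is why it is typically both faster and easier to read than the interpreted loop.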
cool people, follow them!
I built a bluesky labeler for neuroscience methods.
1️⃣ follow/subscribe to: @neuromethods.bsky.social
2️⃣ like the post with your favorite method
➡️ get a shiny methods label in your profile/posts.
Awesome work by @sohafarboud.bsky.social! It was a difficult project starting with "let's replicate some animal work" and ending up being so much more.
Our new paper is out in Cognition! What determines whether confidence follows the classic "folded-X" pattern vs. the "double-increase" pattern? The answer lies in the type of stimulus manipulation. Big thanks to my advisor Doby @dobyrahnev.bsky.social and co-first author @herrickfung.bsky.social !
Preferred presentation styles differ. I think it would be very difficult to vibe-code what I would consider a great presentation - minimal text, plots building up element by element, clear question -> answer structure, no purely decorative distracting elements...
Shhhhh academia still runs on this illusion. How many leading young scholars are below 40?
In principle, I agree, but it's hard to report when the error comes and goes and there are no informative messages.
OSF has been misbehaving for me recently, a lot of "unexpected errors". Would be nice to see a simpler, less error-prone design.
The Board of Governors decided, unilaterally, that no published textbook in the field of sociology could be used in compliance with the law for an Intro to Sociology class. None. Victor: There's not a single existing textbook on the market that could be used that would qualify under state law? Zachary: Correct. Victor: I'm sorry, that's kind of funny. Like, the absurdity of not a single sociology textbook getting past the censors. I mean, it makes me kind of proud of our colleagues, but...
I do want to shout-out my fellow sociologists, who have collectively created a discipline so woke that not a single one of our introductory textbooks can make it past Florida's censors.
Great work everyone.
I consider bots in this type of task very implausible, and implausible claims require strong evidence. From a practical perspective, it raises (yet again) unfounded suspicions about online data, so I can already imagine reviewers asking for data to be cleaned of imaginary bots.
With all due respect to the authors, isn't it a bit early to raise the alarm? People are different, and some don't show "standard" effects. And if you take a bunch of parameters, you'll find some unique patterns. I mean, these are very strong conclusions based on 36 participants with 2-6 (!) suspected bots.