Not sure I buy that it has to be gradient to be serious. E.g. some sort of threshold of self-representation as a sufficient condition for consciousness.
@vilgothuhn
Confused PhD student in psychology at Karolinska Institutet, Stockholm. GAD, ICBT, mechanisms of change. Organizing the ReproducibiliTea JC at KI. Website: https://vilgot-huhn.github.io/mywebsite/ Personal blog at unconfusion.substack.com
>ok but what is America's strategic goal here?
"Don't care. Doesn't matter. Look at how cool these explosions are! We're totally winning."
I can't get over the wrongness of these edits. It somehow combines the idea that war is good in and of itself because it is masculine and not woke, with a mode of engagement where nothing is actually real except images. That it is cringe to take anything seriously.
Every single day I think to myself "I should read Baudrillard if I want to understand the current era." And every single day I don't do it.
War. Just like in the movies!
Not really a critique of the paper, which makes an important (depressing) point and seems to illustrate it well. Just a knee-jerk reaction. When I saw something about "stability" and a simulation approach I sort of expected something more "emergent". Had not come across corridor of stability before.
Not a fan of terms like "stabilize" for this purpose. To me that conjures up an image of something more dynamic/mechanistic than what is going on. Stable as opposed to chaotic. Not just diminishing returns from additional N. #stats
Was this an elaborate set-up with this goal in mind all along?
Tack tack :D
Pooh meme: bored, I don't know anything about this... smug: this is beyond the scope of the paper
editing some writing atm...
I think we should spread out which hills we die on.
We can't all die on the same hill.
Coincidentally, my second paper ever (where I'm first author) was just published yesterday (!) and relates to the difficulties with pre-post differences.
bsky.app/profile/vilg...
I guess it's "assuming nothing would have happened if not for the treatment, this is how much the treatment affected stuff". At least that's the null model that the p-values are based on here, right? Quite a strong assumption in most cases.
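For concreteness, here's a toy version of that null model (all numbers made up, not from any real study): a within-group paired t-test treats "no treatment effect" as "no change at all".

```python
import numpy as np

rng = np.random.default_rng(1)

# Made-up numbers, not from any real study: 40 patients measured
# before and after treatment.
pre = rng.normal(25, 5, 40)        # hypothetical pre-treatment symptom scores
post = pre - rng.normal(3, 4, 40)  # simulated change (improvement on average)

# The within-group p-value tests "mean change = 0", i.e. it assumes
# nothing at all would have happened without treatment.
diff = pre - post
t = diff.mean() / (diff.std(ddof=1) / np.sqrt(len(diff)))
print(f"mean pre-post drop = {diff.mean():.2f}, paired t = {t:.2f}")
```

The t-statistic only speaks against "no change", not against competing explanations like regression to the mean or spontaneous remission.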
(in some cases)
Valid? No. The only information we have available? Yes.
Co-authors on bluesky: @viktorkaldo.bsky.social @erikforsell.bsky.social
(Some may say that any estimate of a pre-post change is a completely meaningless number anyway. I don't agree. Often it's the best thing we've got, and interpretability can be better or worse.)
While we believe that waiting time is affected a lot by factors outside patients' control, any causal interpretation of the time-dependent effect is troubled by a bunch of potential confounders.
However, I think the paper's strength is in its descriptive side. This is what we see when we look! :)
Since we don't know a lot here, it seems important to report the context in which measurements happen carefully. Also, have a separate "pre" measure - never use a measure that determines inclusion/eligibility as your "pre" measure. //
But differences between screening and pre can depend on a bunch of stuff that's unrelated to time: Reactions to the assessment visit, relief at starting treatment, symptom exaggeration, measurement-error induced regression towards the mean (etc etc). We don't know! //
This has some important methodological implications when reporting within-group effects (e.g. in effectiveness studies). One might think that if the "screening" happens very close in time to treatment start, it "counts" as a pre-treatment measure, since symptoms haven't had any time to change after all.//
We found the expected effect for depressive symptoms! But not for the other disorders we looked at. However, the main finding is that while there was a total drop in symptoms, the time-dependent part of it is tiny. Instead most of the drop appears "immediately" even for patients that barely wait. //
A prediction from this is that, if these symptom fluctuations have some inertia (which I think is plausible), patients who wait longer will have had more time to regress to their "as bad as usual" level. We used the fact that waiting times vary at our ICBT clinic to investigate this. //
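The regression-to-the-mean logic can be sketched in a toy simulation. The assumptions here are mine (AR(1) fluctuations, made-up thresholds and numbers), not the paper's model:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy sketch: symptoms fluctuate around a stable personal level with
# some day-to-day inertia, modeled here as an AR(1) process.
n_patients = 10_000
true_mean = 20.0                   # hypothetical long-run symptom level
phi = 0.9                          # inertia of fluctuations
sigma = 4.0 * np.sqrt(1 - phi**2)  # innovation sd -> stationary sd of 4

T = 60  # days
x = np.zeros(n_patients)
traj = np.empty((T, n_patients))
for t in range(T):
    x = phi * x + rng.normal(0.0, sigma, n_patients)
    traj[t] = true_mean + x

# Patients "self-refer" on a day their symptoms are extra bad
screen_day = 20
referred = traj[screen_day] > 24.0

# Mean drop between screening and treatment start, by waiting time
drops = {w: (traj[screen_day, referred] - traj[screen_day + w, referred]).mean()
         for w in (1, 5, 20, 39)}
for w, d in drops.items():
    print(f"wait {w:2d} days: mean symptom drop before treatment = {d:.2f}")
```

With inertia, longer waits leave more time for the screening-day spike to decay back toward the usual level, so the simulated pre-treatment drop grows with waiting time even though nobody was treated.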
I just published my second paper! Woo!
In psychotherapy trials we often see that symptoms reduce between screening and start of treatment. A plausible idea about that is that patients self-refer when their fluctuating symptoms are extra bad. We checked! (we tried to check) //
Maybe there's a pattern here? dynomight.net/pattern/
Anecdotally, medical doctor-researchers from an older generation often have wildly unrealistic ideas about what sort of stuff you can do with AI (as a replacement for ordinary statistical models).
Interesting!
Still I think it works well pedagogically as an example of an analytical goal (estimating a proportion with a kinda small sample) where having at least some prior is bound to be reasonable.
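That kind of problem has a tidy conjugate sketch. The numbers below are illustrative, not taken from the book:

```python
# Conjugate Beta-Binomial update: a toy illustration of why even a weak
# prior is reasonable when estimating a proportion from a small sample.
# (Numbers are made up for illustration, not taken from the book.)
successes, n = 3, 10  # e.g. 3 "yes" answers out of 10 respondents

posterior_means = {}
for a, b in [(1, 1), (2, 2)]:  # flat vs mildly informative Beta prior
    post_a = a + successes                 # Beta posterior parameters
    post_b = b + (n - successes)
    posterior_means[(a, b)] = post_a / (post_a + post_b)
    print(f"Beta({a},{b}) prior -> posterior mean {posterior_means[(a, b)]:.3f}")

print(f"raw sample proportion: {successes / n:.3f}")
```

The mildly informative Beta(2,2) prior pulls the estimate a bit toward 0.5, which is often more sensible than trusting a raw proportion from ten observations.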
Screenshot of the relevant part:
I wrote about this a bit in my book review of the popular science book on Bayes by @tomchivers.bsky.social.
unconfusion.substack.com/p/book-revie...
I'm probably unnecessarily charitable to this paper that I only read the title of, but I really think a chunk of Bayesian vs Frequentist disagreements comes down to fundamentally different ideas about what a research paper should even do in the first place.