
sjdm-tweets

@sjdm-tweets

An official SJDM account. Tag your JDM announcements, conferences, and jobs with @sjdm-tweets.bsky.social Questions? contact @dggoldst.bsky.social

723 Followers Β· 1 Following Β· 64 Posts Β· Joined 16.10.2024

Latest posts by sjdm-tweets @sjdm-tweets

What we love about this paper is how it digs down into the processes that enable and constrain desirability biases and wishful thinking. Preregistration and open practices enhance credibility.

08.03.2026 19:17 πŸ‘ 0 πŸ” 0 πŸ’¬ 0 πŸ“Œ 0

Authors’ Bluesky handles we could find:
@jdstrueder.bsky.social @paulwind.bsky.social

08.03.2026 19:17 πŸ‘ 0 πŸ” 0 πŸ’¬ 1 πŸ“Œ 0

Today's SJDM Featured Paper is: Strueder, J. D., Looi, T., Clark, P. M., Cockburn, J., & Windschitl, P. D. (2026). Optimistic Predictions Under Uncertainty: Active Information Search Both Supports and Constrains Motivated Bias [Preprint]. PsyArXiv. osf.io/preprints/ps...

08.03.2026 19:17 πŸ‘ 3 πŸ” 2 πŸ’¬ 1 πŸ“Œ 0

Which risky choices should (and do) we study as behavioral scientists? In two related articles, we recently examined the current "ecology of risk" (i.e., laypersons' reports on what risky choices they face in real life) and how these choices differ from those […]

[Original post on mstdn.science]

02.03.2026 13:40 πŸ‘ 4 πŸ” 3 πŸ’¬ 1 πŸ“Œ 1

What we love about this paper is how it attempts to reconcile conflicting findings on whether groups come to more or less accurate judgments than do individuals. Preregistration and open practices enhance credibility.

06.03.2026 13:32 πŸ‘ 0 πŸ” 0 πŸ’¬ 0 πŸ“Œ 0

Authors’ Bluesky handles we could find:
@joshuabecker.bsky.social

Please reply with more handles if you can find them!

06.03.2026 13:32 πŸ‘ 0 πŸ” 0 πŸ’¬ 1 πŸ“Œ 0

Today's SJDM Featured Paper is: Becker, J., Almaatouq, A., & HorvÑt, E.-Á. (2021). Network Structures of Collective Intelligence: The Contingent Benefits of Group Discussion. arXiv. doi.org/10.48550/arX...

06.03.2026 13:32 πŸ‘ 1 πŸ” 1 πŸ’¬ 1 πŸ“Œ 0

What we love about this paper is how it helps us understand the role of AI in promoting and debunking misinformation. Preregistration and open practices enhance credibility.

06.03.2026 13:27 πŸ‘ 0 πŸ” 0 πŸ’¬ 0 πŸ“Œ 0

Authors’ Bluesky handles we could find:
@tomcostello.bsky.social @kellinpelrine.bsky.social @matthewkowal.bsky.social @arechar.bsky.social @godbout.bsky.social @gleave.me @dgrand.bsky.social @gordpennycook.bsky.social

05.03.2026 13:58 πŸ‘ 0 πŸ” 0 πŸ’¬ 1 πŸ“Œ 0

Today's SJDM Featured Paper is:

Costello, T. H., Pelrine, K., Kowal, M., Arechar, A. A., Godbout, J.-F., Gleave, A., Rand, D., & Pennycook, G. (2026). Large language models can effectively convince people to believe conspiracies. arXiv. doi.org/10.48550/arX...

05.03.2026 13:58 πŸ‘ 7 πŸ” 3 πŸ’¬ 1 πŸ“Œ 0

Today's SJDM Featured Paper is:
Hagmann, D., Sajons, G. B., & Tinsley, C. H. (in press). Base rate neglect as a source of inaccurate statistical discrimination. Management Science. files.dhagmann.com/papers/2025_...

12.01.2026 17:28 πŸ‘ 0 πŸ” 0 πŸ’¬ 0 πŸ“Œ 0

Authors we could find on here. @stavatir.bsky.social @daviddunning6.bsky.social

10.01.2026 17:49 πŸ‘ 0 πŸ” 0 πŸ’¬ 0 πŸ“Œ 0

Today's SJDM Featured Paper is: Atir, S., & Dunning, D. A. (in press). Learning more than you can know: Introductory education produces overly expansive self-assessments of knowledge. Management Science.

tinyurl.com/AtirDunning-...

10.01.2026 17:49 πŸ‘ 3 πŸ” 0 πŸ’¬ 1 πŸ“Œ 2

Authors we could find on here. @mikedekay.bsky.social

09.01.2026 19:20 πŸ‘ 0 πŸ” 0 πŸ’¬ 0 πŸ“Œ 0

Today's SJDM Featured Paper is: DeKay, M. L. (in press). Risky-choice framing effects persist when option descriptions are matched and complete: A replication and extension of DeKay and Dou (2024). Psychonomic Bulletin & Review. osf.io/preprints/ps...

09.01.2026 19:20 πŸ‘ 4 πŸ” 2 πŸ’¬ 2 πŸ“Œ 1

Today's SJDM Featured Paper is: Dietvorst, B. J. (in press). Understanding people's preferences for predictions: People prioritize being right over minimizing how wrong they are in expectation. Management Science. doi.org/10.1287/mnsc...

08.01.2026 13:32 πŸ‘ 6 πŸ” 3 πŸ’¬ 0 πŸ“Œ 1

Authors we could find on here. Please reply with more handles if you can find them!

@moritzingendahl.bsky.social
@hansalves.bsky.social

07.01.2026 22:45 πŸ‘ 1 πŸ” 0 πŸ’¬ 0 πŸ“Œ 0

Today's SJDM Featured Paper is: Vaz, A., Ingendahl, M., Mata, A., & Alves, H. (2025). "Stop the Count!" – How reporting partial election results fuels beliefs in election fraud. Psychological Science, 36(8), 676-688. doi.org/10.1177/0956...

07.01.2026 22:45 πŸ‘ 3 πŸ” 2 πŸ’¬ 1 πŸ“Œ 0
Individual differences in overconfidence: A new measurement approach | Judgment and Decision Making | Cambridge Core

The ad collab emerged from a discussion with Don at @sjdm-tweets.bsky.social about this paper: www.cambridge.org/core/journal...

In the paper, Jabin & I try to address a longstanding problem for measuring overconfidence: One's level of overconfidence is highly dependent on the task in question.

17.12.2025 17:17 πŸ‘ 0 πŸ” 2 πŸ’¬ 1 πŸ“Œ 0

The Society for Judgment and Decision Making is pleased to announce that the latest newsletter is ready for download:

sjdm.org/newsletters/

This issue contains announcements, conferences, and jobs!

31.12.2025 01:38 πŸ‘ 5 πŸ” 1 πŸ’¬ 0 πŸ“Œ 0

Congratulations to Hengchen Dai who is the winner of the 2026 Early Career Impact Award from the Federation of Associations in Behavioral and Brain Sciences!

fabbs.org/about/early-...

17.12.2025 11:13 πŸ‘ 2 πŸ” 0 πŸ’¬ 0 πŸ“Œ 1
The template's text: 

I object to providing my free labor to publishers that rely on the volunteer efforts of authors, reviewers, and editors, claim copyright over our work, restrict the dissemination of scientific knowledge, and then charge our own libraries for access to it.

Although I appreciate the valuable role that journal publishers once played, the Internet age has largely eliminated the value of distributing paper journals. I wish I could publish all my work in open-access venues. Yet, perhaps like you, I find myself constrained by the current system while at the same time envisioning a better future. One way to help facilitate change is if individual editors or whole editorial teams move to open-access versions of old-fashioned paper journals. In case you or your editorial team has ever considered such a move, allow me to offer my enthusiastic support.

With apologies for being difficult,
Don Moore


The Drake no/yes meme, except Drake is replaced with an anthropomorphized mouse (a la Stuart Little).

This AI-generated mouse appeared throughout the slides as a personification of various points about the challenges scientists face, including the increasing politicization of science funding. The mouse first appeared early in the address when Don reminded us of the "transgender mice" remark from the 47th POTUS.


The QR code for Don Moore's review declination template on the final slide.

The Q&A included pushback:

Danny Oppenheimer: Would more open publishing allow more misinformation by letting more junk science be posted alongside good science?

- Reply: The status quo already includes preprints, and science reporting is not good. It is hard to know whether openness would actually make things much worse.


(I didn't see who asked): How else will early career people get hired and promoted if not by prestigious publications? Wasn't Don's early work on overconfidence made famous by a paywalled, profit-seeking Psychology journal article?



- Reply: Don still feels uncomfortable about that paper’s home. And there are better forms of prestige, such as the prestige one gets for prioritizing rigorous open science, changing one’s mind in light of better evidence, etc.


Don also shared his template for declining review requests from journals that try to profit from our taxpayer-funded and volunteer #science:

learnmoore.org/hotfresh.html

That URL also links to a form to share your #HotFresh preprints, data, reviews, etc. to the @sjdm-tweets.bsky.social newsletter.

23.11.2025 19:40 πŸ‘ 1 πŸ” 1 πŸ’¬ 1 πŸ“Œ 0

If you want to submit your research to HotFresh Research News or find a boilerplate letter to refuse reviewing for closed journals, visit this link

learnmoore.org/hotfresh.html

23.11.2025 18:38 πŸ‘ 2 πŸ” 2 πŸ’¬ 0 πŸ“Œ 0

Don Moore kicking off the 2025 Presidential Address at the Society for Judgment and Decision Making conference

23.11.2025 18:15 πŸ‘ 3 πŸ” 0 πŸ’¬ 0 πŸ“Œ 0

For Sunday at the SJDM conference, please note the following changes to the program!

23.11.2025 18:04 πŸ‘ 1 πŸ” 1 πŸ’¬ 0 πŸ“Œ 0
Experimental conditions:
- No facilitation
- Human facilitation
- LLM facilitation


Human and LLM facilitators receive the same instructions

1. Guiding message
"People may have different information about what is being discussed in this meeting, so encourage everyone to share all of the relevant information they have."

2. Information about the facilitator role
"As a facilitator for this meeting, your specific role is to help the group make a decision by, first, making sure that everyone is heard from and shares what they know and, second, acting as a scoreboard and keeping track of pros and cons."


Forecasted and actual decisions were similar across conditions.


See Figure 7 from the preprint (URL/DOI in the post).

Survey question: "You completed this task with a [human, AI, none] facilitator. If you were to repeat this task, which would you prefer?"

"Users seem to prefer what they know, whatever it may be"

"Experiencing LLM facilitation seems to yield interest in both facilitators, in a way that human facilitation doesn't."


Before #MSFT offered "Facilitator" in Teams meetings, @dggoldst.bsky.social et al. developed something similar.

People preferred their #statusQuo, but #AI facilitation
- made people more open to both human and #LLM facilitation.
- didn't seem to impact decisions

More via doi.org/10.48550/arX...

23.11.2025 00:57 πŸ‘ 2 πŸ” 2 πŸ’¬ 1 πŸ“Œ 0
People Who Agreed on a Position Varied in Their Reasons

Position: "We should increase restrictions on foreign ownership of U.S. land and businesses."

Reasons:
- Limits foreign influence 22%
- Protects national sovereignty 30%
- Preserves economic control 25%
- Prevents land exploitation and misuse 20%
- Others 3%


Study 1: Do We Ask Why When We Agree?

Plot of the percentage of people that asked why (y-axis) by agreement (x-axis):

When partner agreed: 25%
When partners disagreed: 70%


Study 3: The Learning Loss Incurred by Not Asking Why When We Agree


When People Agreed, They Were Worse at Predicting Their Counterpart's Future Preference: 48% versus 37% (p = 0.023).


Agreement can feel nice, but it may discourage reflection and understanding.

Agreeing with someone about a policy (vs. disagreeing) predicted lower
- odds of asking, "Why?"
- accuracy in predicting their preferences

Follow Zhiyang (Bella) et al. for the pub alert: scholar.google.com/citations?us...

23.11.2025 00:12 πŸ‘ 1 πŸ” 1 πŸ’¬ 1 πŸ“Œ 0
How AI may erode social signals in writing
Three qualities lead an observable signal to be perceived as credible:

1. Costly
Not anymore. Generating content has become trivially easy, and requires less energy: 130-1500 times less CO2/page of text than human writers (Tomlinson et al., 2024)

2. Hard to fake
Not anymore. Generative AI is remarkably effective at creating personalized, context-dependent content

3. Verifiable
Not anymore. We generally lack the tools to reliably detect AI-generated content (KΓΆbis & Mossink, 2021; Kreps et al., 2022; Sadasivan et al., 2023; Tang et al., 2023; Jakesch et al., 2023; Gao et al., 2024; Porter & Machery, 2024)


Uncertainty & Limited Attention

What if audiences are aware of potential AI use but remain uncertain about actual AI use? (a classic signaling problem)

What if they don't even think about it? (i.e., default judgments)

Attentional constraints, bounded rationality (Simon, 1955; Slovic, 1972)
- "What you see is all there is": Decision problems are constrained by their presentation and what is most salient in our minds (Kahneman, 2011; Enke, 2020)


Does participants' own use of genAI moderate these effects?

"AI disclosure penalty" (human vs. AI): YES

Default judgments (human vs. no info): NO

The slide shows a plot of "overall social impression" (y-axis) by message source (x-axis: no info, human-written, uncertain, and AI-generated). The AI-generated source cue had a large negative impact (d > 1, p < 0.001).



How do people think about using #AI for #writing?

Andras Molnar and Jiaqi Zhu found people didn't suspect #LLM use without cues or reminders, but such cues DID hinder impressions (2 experiments, N = 1301, 8 contexts).

Follow for the pub alert: www.researchgate.net/profile/Jiaq...

#jobMarket #socialPsych

22.11.2025 23:24 πŸ‘ 3 πŸ” 1 πŸ’¬ 1 πŸ“Œ 0
Can open-ended responses, analyzed with LLMs, help overcome these challenges?
β†’ Open-ended format does not bring certain information to mind; instead, participants generate the content they consider most important
β†’ This may help minimize order effects and increase predictive power
β†’ A single open-ended question can contain multiple psychological dimensions
β†’ LLMs can extract latent structure at scale

(e.g., protocol analyses as discussed by Ericsson & Simon, 1993)


From a within-subject design:
- corr(global life satisfaction rating, GPT-inferred life satisfaction) = 0.76
- corr(global life satisfaction rating, relationship satisfaction rating) = 0.46
- corr(global life satisfaction rating, GPT-inferred relationship satisfaction) = 0.58


From a between-subject design:
- corr(global life satisfaction rating, GPT-inferred life satisfaction) = 0.63
- corr(global life satisfaction rating, GPT-inferred relationship satisfaction) = 0.52


Other domains show similar effects:

β†’ Rating life satisfaction and 8 dimensions of life satisfaction (within-subjects design with Stanford students)
- Largest effects for: health (r = 0.60 vs. r = 0.77) and academic life (r = 0.57 vs. r = 0.74)

β†’ Future outlook of AI (between-subjects design on Prolific)
- Largest effect for: inequality concerns (r = 0.43 vs. r = 0.76)

β†’ Smartphone satisfaction (between-subjects design on Prolific)
- Largest effect for: price (r = 0.30 vs. r = 0.64)
- No difference for battery concerns


Rating scales are vulnerable to biases that reduce predictive power.

Can #languageModels do better by analyzing open-ended responses?

@adaaka.bsky.social found such #LLM measures were more predictive and less vulnerable to #bias in 7 preregistered studies (N = 2326).

#psychometrics #tech #PhilSci

22.11.2025 22:23 πŸ‘ 6 πŸ” 1 πŸ’¬ 1 πŸ“Œ 4
Research question 2: Can large language models predict the perceptions people reported about the real-life decisions they verbalized?


An illustration of how a language model can "directly" infer a decision-maker's self-reported perceptions based solely on the decision-maker's verbalization of the decision (and not the self-reported perception).


The three methods used to have language models infer people's self-reported perceptions from the decision-maker's think-aloud recording of the decision: direct, feature extraction, and embeddings.


Visualization of the correlations between model-predicted and self-reported perceptions (left to right) by method (top to bottom). Correlations range from 0.26 to 0.59.


Another way to see inside the black box of #decisionMaking involves recording people thinking aloud during decisions.

@aaronlob.bsky.social and @renatofrey.mstdn.science.ap.brid.gy had #AI predict people’s perceptions from their verbalizations (N = 178).

The correlations ranged from β‰ˆ0.3 to β‰ˆ0.6.

22.11.2025 21:51 πŸ‘ 5 πŸ” 1 πŸ’¬ 1 πŸ“Œ 0