This is the third story I've read in a month about how AI chatbots are leading people into psychological crises.
Gift link
I don’t really have the energy for politics right now. So I will observe without comment:
Executive Order 14110 ("Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence") was revoked.
Excited to be presenting my work with @teaywright.bsky.social at #COLING2025 next week in Abu Dhabi! Find us in poster session 6/E on Jan 22nd (11 AM in the atrium).
We focus on automatically evaluating contextual informativeness relative to multiple target words in child-directed text, with implications for improving the automatic generation of educational stories for early childhood vocabulary intervention. Can’t wait to share, and learn about others’ work :)
Paper: arxiv.org/abs/2412.17427
1. Can you stop companies from training generative AI using your data? No, not currently.
2. Is this dataset meant for training generative AI? 🤷‍♀️ Unclear, but more likely for research and statistical analysis.
3. Is it OK to duplicate and distribute people’s data without giving them the agency to opt out? I’d argue no.
So many people, CS researchers included, think you can explore how an LLM works simply by asking it to tell you what it is doing or "thinking".
Here @jennhu.bsky.social provides an excellent illustration of how that approach fails even at the most basic level.