Drawing from anthropology and science and technology studies, the paper diagnoses the conditions that are hostile toward accountability in ML.
New conceptual paper (w/ Dana Calacci and Cindy Lin) at #ACM #AAAI #AIES2025 on why there's no accountability for social claims in so-called "general-purpose" Machine Learning research. I will present the paper on the 21st on a gargantuan poster ;)
Paper: lnkd.in/ePn8USDx
My website: koutianqi.info
Technical debt, generalizability debt, no one talking about boycott debt.
My home institution, Penn State, is on the precipice of having a grad student union! I am so proud of all the work we've done.
Yeah! Like a "repository" for generic social claims they make and what "common" evidence they need to gather to back them up ;)
This week, Cristian Larroulet Philipi joins us to talk about measurement in the human sciences: why it can be more philosophically complex than in the physical sciences, and how it raises pressing questions about the role of numbers in psychology, social science, and policy.
#philsci #measurement
Hahaha so right, I've corrected it!
thank u!
Screenshot of first page of commentary.
After one too many conversations about the ways GenAI + peer review = shitshow for all involved, I dashed off this slightly polemic commentary on how I think we should talk about GenAI as an epistemic carcinogen.
link.springer.com/article/10.1...
5/ We call for two collaborative research agendas:
1️⃣ A repository of social claim types + evidence.
2️⃣ Tools to map each claim in an ML paper to its supporting evidence.
4/ These resistances uphold a narrow form of epistemic authority - one that is misaligned with modern-day ML's diverse goals - and deepen structural inequities between techno-elites & other communities.
💡 Social claims should be treated like knowledge claims - explicit, with evidence for scrutiny.
3/ Applying this lens, we diagnose that the gap is sustained by:
🧠 Cognitive resistance - epistemic assumptions that excuse ML researchers from justifying social claims.
🏛️ Structural resistance - norms & incentives (computational capture) prioritizing benchmarks over social relevance.
2/ We identify the symptom: no effective accountability mechanisms for this gap.
📍 We coin "dead zone of accountability" - aspects of the ML ecosystem (methods, norms, metrics, power structures) that resist critical scrutiny & meaningful accountability.
1/ 🚨 Our paper (w/ @dana.witchy.business & Cindy Lin) is accepted to AAAI/ACM #AIES2025! 📷 arxiv.org/pdf/2508.08739
⚠️ ML papers make broad/implicit social claims - but these often don't match reality. We call this mismatch the claim-reality gap.
The cover of the report, with dark blue title "How Big Cloud becomes Bigger" ("Bigger" in bold), subtitle in light blue "Scrutinizing Google, Microsoft, and Amazon's investments", and light grey author line "David Gray Widder and Nathan Kim", on a light grey page background with three clouds - white, dark grey, and very dark grey - getting bigger toward the top right until they're off the page.
📣🚨 NEW: ☁️ Big Cloud - Google, Microsoft & Amazon - control two thirds of the cloud compute market. They're getting rich off the AI gold rush.
In new work with @nathanckim.bsky.social, we show how Big Cloud is expanding their empire by scrutinizing their *investments*… 🧵
📄 PDF: dx.doi.org/10.2139/ssrn...
Already in town in LA for PLSC 2025!!!! Friends, hit me up for coffee, dinner, or a city walk ;)!!!
Excited to announce my first peer-reviewed paper 'The Problem of Context Revisited' is out now!
Open access in the excellent 'Studies in History and Philosophy of Science', EIC @rachelankeny.bsky.social
Love any feedback, cheers
#philsci #histsci #sts #hps
www.sciencedirect.com/science/arti...
Reasonable to ask if we are now oppressing the Russian opposition at Putin's behest.
theins.press/en/news/280037
Some news many years in the making: My book EMPIRE OF AI, out May 20, is ready for pre-order at empireofai.com. It tells the inside story of OpenAI as a lens for understanding the moment we're in: the tech elite's extraordinary seizure of power and its threat to democracy. 1/
January update from the normative philosophy of computing newsletter: new CFPs, papers, workshops, and resources for philosophers working on normative questions raised by AI and computing.
Happy new year Seth!
In his last blog post, Felix Hill talked about how he could not get away from AI wherever he was, yet so many online eulogies still mourn how good at it he was. This field really is so f'ed up. I'll be angry writing my FAccT submission tmr.
Mary Shelley's novel suggests not only that magic and alchemy preceded science but also that science can infuse and revive their prescientific ambitions.
But it seems like the field is about to get an epistemic epiphany for opportunistic reasons.
The blogger is targeting AI PhD graduates who prefer industry research positions. Given that confining industry AI scientists' research efforts to product work began years ago, this should not come as a surprise.
Linked blog is on the current AI bubble - AI PhDs' skills are in much lower demand than 10 years ago. The blogger very poignantly implied that innovation stagnation led to this. "Science" is being replaced with rote training that BS/MS graduates can take on with less compensation and a more workable ego.