
Tianqi Kou

@koutianqi

phd candidate at penn state ist, feminist science studies, ai ethics. 🌈he/him. perpetually busy reading things I don’t understand. koutianqi.info

137
Followers
246
Following
19
Posts
17.12.2024
Joined

Latest posts by Tianqi Kou @koutianqi

Drawing from anthropology and science and technology studies, the paper diagnoses the conditions that are hostile to accountability in ML.

19.10.2025 18:39 👍 1 🔁 0 💬 0 📌 0

New conceptual paper (w/ Dana Calacci and Cindy Lin) at #ACM #AAAI #AIES2025 on why there’s no accountability for social claims in so-called “general-purpose” Machine Learning research. I will present the paper on the 21st on a gargantuan poster ;)

Paper: lnkd.in/ePn8USDx
My website: koutianqi.info

19.10.2025 18:39 👍 9 🔁 0 💬 1 📌 0

Technical debt, generalizability debt, no one talking about boycott debt.

14.10.2025 23:21 👍 0 🔁 0 💬 0 📌 0

My home institution, Penn State, is on the precipice of having a grad student union! I am so proud of all the work we’ve done.

02.10.2025 15:41 👍 0 🔁 0 💬 0 📌 0

Yeah! Like a “repository” for the generic social claims they make and the “common” evidence they need to gather to back them up ;)

17.09.2025 01:39 👍 4 🔁 0 💬 0 📌 0
S5 E5 - Cristian Larroulet Philippi on Measurement in the Human Sciences · The HPS Podcast - Conversations from History, Philosophy and Social Studies of Science

This week, Cristian Larroulet Philippi joins us to talk about measurement in the human sciences: why it can be more philosophically complex than in the physical sciences, and how it raises pressing questions about the role of numbers in psychology, social science, and policy.

#philsci #measurement

14.08.2025 15:03 👍 58 🔁 19 💬 6 📌 4

Hahaha so right, I’ve corrected it!

14.08.2025 13:07 👍 0 🔁 0 💬 0 📌 0

thank u!

13.08.2025 13:33 👍 0 🔁 0 💬 0 📌 0
Screenshot of first page of commentary.

After one too many conversations about the ways GenAI + peer review = shitshow for all involved, I dashed off this slightly polemical commentary on how I think we should talk about GenAI as an epistemic carcinogen.

link.springer.com/article/10.1...

13.08.2025 05:15 👍 71 🔁 23 💬 3 📌 7

5/ We call for two collaborative research agendas:
1️⃣ A repository of social claim types + evidence.
2️⃣ Tools to map each claim in an ML paper to its supporting evidence.

13.08.2025 02:32 👍 6 🔁 0 💬 1 📌 0
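The two agendas above are conceptual, but the claim-to-evidence mapping in 2️⃣ could be sketched as a simple record type. This is purely illustrative - the class, field names, and claim-type labels here are my own assumptions, not anything the paper proposes:

```python
from dataclasses import dataclass, field


@dataclass
class SocialClaim:
    """One social claim extracted from an ML paper (illustrative only)."""
    claim_text: str   # the claim as written in the paper
    claim_type: str   # hypothetical label, e.g. "real-world-impact"
    evidence: list[str] = field(default_factory=list)  # studies, evaluations, citations

    def is_supported(self) -> bool:
        # A claim with no linked evidence sits in the "claim-reality gap".
        return bool(self.evidence)


# A benchmark paper asserting a downstream benefit with no backing evidence:
claim = SocialClaim(
    claim_text="Our benchmark gains translate to real-world medical triage.",
    claim_type="real-world-impact",
)
print(claim.is_supported())   # False until evidence is attached
claim.evidence.append("clinical user study, n=120")
print(claim.is_supported())   # True
```

A tool like the one the authors call for would populate such records automatically from paper text and flag every record where `is_supported()` is false.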

4/ These resistances uphold a narrow form of epistemic authority - one that is misaligned with modern-day ML’s diverse goals - and deepen structural inequities between techno-elites & other communities.
💡 Social claims should be treated like knowledge claims - made explicit, with evidence open to scrutiny.

13.08.2025 02:32 👍 5 🔁 0 💬 1 📌 0

3/ Applying this lens, we diagnose that the gap is sustained by:
🧠 Cognitive resistance - epistemic assumptions that excuse ML researchers from justifying social claims.
πŸ—οΈ Structural resistance - norms & incentives (computational capture) prioritizing benchmarks over social relevance.

13.08.2025 02:32 👍 7 🔁 0 💬 1 📌 0

2/ We identify the symptom: no effective accountability mechanisms for this gap.
🆕 We coin the term “dead zone of accountability”: aspects of the ML ecosystem (methods, norms, metrics, power structures) that resist critical scrutiny & meaningful accountability.

13.08.2025 02:32 👍 6 🔁 0 💬 1 📌 0

1/ 🚨 Our paper (w/ @dana.witchy.business & Cindy Lin) is accepted to AAAI/ACM #AIES2025! 📷 arxiv.org/pdf/2508.08739
⚠️ ML papers make broad/implicit social claims - but these often don’t match reality. We call this mismatch the claim-reality gap.

13.08.2025 02:32 👍 18 🔁 6 💬 3 📌 1
The cover of the report, with dark blue title "How Big Cloud becomes Bigger" with "Bigger" in bold, and subtitle in light blue "Scrutinizing Google Microsoft and Amazon's investments", and light grey author line "David Gray Widder and Nathan Kim" on light grey page background with three clouds: white, dark grey, and very dark grey getting bigger and going towards the top right until they're off the page.

📣🚨 NEW: ☁️ Big Cloud - Google, Microsoft & Amazon - control two thirds of the cloud compute market. They’re getting rich off the AI gold rush.

In new work with @nathanckim.bsky.social, we show how Big Cloud is expanding their empire by scrutinizing their *investments*… 🧵

📄 PDF: dx.doi.org/10.2139/ssrn...

06.08.2025 14:39 👍 77 🔁 37 💬 1 📌 21

Already in town in LA for PLSC 2025!!!! Friends, hit me up for coffee, dinner, or a city walk ;)!!!

27.05.2025 23:29 👍 0 🔁 0 💬 0 📌 0
The problem of context revisited: Moving beyond the resources model · The problem of context, which explores relations between societal conditions and science, has a long and contentious tradition in the history, philoso…

Excited to announce my first peer-reviewed paper 'The Problem of Context Revisited' is out now!

Open access in the excellent 'Studies in History and Philosophy of Science', EIC @rachelankeny.bsky.social

Love any feedback, cheers

#philsci #histsci #sts #hps

www.sciencedirect.com/science/arti...

04.06.2024 00:15 👍 42 🔁 6 💬 2 📌 2
Russian scientist from Harvard Medical School detained in U.S., faces deportation and likely arrest upon return due to anti-war stance · A Russian scientist working at Harvard Medical School has been detained in the United States and placed in immigration detention. According to multiple independent Russian media outlets and the scient...

Reasonable to ask if we are now oppressing the Russian opposition at Putin’s behest.
theins.press/en/news/280037

28.03.2025 01:27 👍 11724 🔁 4765 💬 521 📌 301

Some news many years in the making: My book EMPIRE OF AI, out May 20, is ready for pre-order at empireofai.com. It tells the inside story of OpenAI as a lens for understanding the moment we’re in: the tech elite's extraordinary seizure of power and its threat to democracy. 1/

26.03.2025 11:04 👍 675 🔁 214 💬 27 📌 51
Normative Philosophy of Computing - January · Happy New Year!

January update from the normative philosophy of computing newsletter: new CFPs, papers, workshops, and resources for philosophers working on normative questions raised by AI and computing.

16.01.2025 06:48 👍 16 🔁 5 💬 1 📌 0

Happy new year Seth!

16.01.2025 07:22 👍 1 🔁 0 💬 0 📌 0

In his last blog post, Felix Hill talked about how he could not get away from AI wherever he was, yet so many online eulogies still mourn how good at it he was. This field really is so f’ed up. I’ll be angry writing my FAccT submission tmr.

05.01.2025 05:44 👍 3 🔁 0 💬 0 📌 0
Victor Frankenstein’s Technoscientific Dream of Reason · How is it that this premodern mystical alchemist appears so contemporary today?

Mary Shelley's novel suggests not only that magic and alchemy preceded science but also that science can infuse and revive their prescientific ambitions.

29.12.2024 19:46 👍 25 🔁 8 💬 2 📌 1

But it seems like the field is about to get an epistemic epiphany, for opportunistic reasons.

23.12.2024 19:32 👍 0 🔁 0 💬 0 📌 0

The blogger is targeting AI PhD graduates who prefer industry research positions. Given that industry began confining AI scientists’ research efforts to product work years ago, this should not come as a surprise.

23.12.2024 19:32 👍 0 🔁 0 💬 1 📌 0
i sensed anxiety and frustration at NeurIPS’24 – Kyunghyun Cho

kyunghyuncho.me/i-sensed-anx...

23.12.2024 19:23 👍 1 🔁 0 💬 1 📌 0

The linked blog post is on the current AI bubble: AI PhDs’ skills are in much lower demand than 10 years ago. The blogger very poignantly implies that innovation stagnation led to this - “science” is replaced by rote training that BS/MS graduates can take on for less compensation and with a more workable ego.

23.12.2024 19:23 👍 1 🔁 0 💬 1 📌 0