
wendy norris

@wendynorris

Assoc Prof | Information Science | Ex-investigative reporter and editor | Crisis informatics nerd | Makes good trouble | Tell your dog I said "hi"

1,468 Followers
2,445 Following
1,050 Posts
Joined 08.07.2023

Latest posts by wendy norris @wendynorris

Screenshot of ProPublica webpage titled “Explore Financial Disclosures From President Trump and 1,500 of His Appointees,” accompanied by this blurb: “Use this database to explore potential conflicts of interest for President Donald Trump and his team. The documents disclose positions officials have held outside government, their assets and their debts, among other things.” A search bar prompts the user to search for a person or financial holding.


THREAD: We just released a tool that allows you to search billions of dollars in wealth, ties to the most powerful companies in the country, and details of the personal finances of President Trump and over 1,500 of his appointees.

Here’s how it works 1/

05.03.2026 14:01 👍 659 🔁 347 💬 10 📌 38
Can AI Replace Social Science Researchers? No. No it can't. Come on, now.

New post: Can AI Replace Social Science Researchers? (No. No it can't. Come on, now.)

davekarpf.beehiiv.com/p/can-ai-rep...

05.03.2026 16:49 👍 440 🔁 120 💬 24 📌 37
Post image

New paper from team @aial.ie! aial.ie/research/gpa...

The EU AI Act's Article 53(1)(d) obliges GPAI model providers to publicly provide a 'summary' of their model’s training data. The team assessed published summaries along 6 dimensions & found that all big providers failed on all 6.

1/

05.03.2026 18:04 👍 114 🔁 69 💬 2 📌 3
A Rational Analysis of the Effects of Sycophantic AI People increasingly use large language models (LLMs) to explore ideas, gather information, and make sense of the world. In these interactions, they encounter agents that are overly agreeable. We argue...

Added to my reading list:
arxiv.org/abs/2602.14270

03.03.2026 16:35 👍 5 🔁 2 💬 1 📌 0

You’re right. Thanks for the correction. I used the wrong pronoun. I meant “their” as in Google, Apple, NVIDIA, and other Anthropic partners who have defense contracts and don’t want Hegseth to delist them from DoD procurement as supply chain risks.

02.03.2026 13:15 👍 1 🔁 0 💬 0 📌 0
In 2026, colleges must teach students that this is not the end of the world. We must teach hope. Current undergraduates can barely remember a time before the threats of climate change and authoritarianism loomed to catastrophic scale. Since 2010, the future depicted in TV, books, and games has been dystopian or apocalyptic, so for our current students the end of the world feels more familiar and realistic than a future with hope.

Now we are asking them to choose majors and life paths when the desirability, indeed the very existence, of whole sectors of employment are in question, due to the overwhelming promises of LLMs and machine learning. As young people hear daily that vocation after vocation may vanish into automation’s maw, and that democracy, liberty, land, sea, and sky are all in jeopardy, despair is growing. Despair is very emotionally tempting. It means freedom from the responsibility to shape the future.

This is a terrifying turning point, but many generations before us have faced such turning points, and met them. We can offer our students perspective. Only a few dozen institutions on Earth are more than 900 years old, and the vast majority are universities. The university system is not a house of straw to buckle in this storm: We are the rocks that have sheltered the knowledge, hope, and truth through tumults which have toppled kingdoms while classrooms endured.

We can endure this, and be a guiding light through it, but only by recentering, by teaching citizens, not workers; power, not PowerPoint; aspiration, not apocalypse. Despair is how we lose. The classroom is where we battle it. All other battles flow from here.

Ada Palmer is an associate professor of history at the University of Chicago.


This, from Ada Palmer as part of The Chronicle's survey of 11 scholars on the future of higher ed, is what I needed to end the week.

28.02.2026 00:54 👍 404 🔁 211 💬 4 📌 37

The government played Anthropic’s game better than Anthropic could, destroying the plausible deniability Anthropic (and all these companies) depend on to maintain their “respectable” image while they create products that are designed for destruction.

02.03.2026 02:51 👍 76 🔁 32 💬 2 📌 0

An important through-line is that AI “intelligence” in military applications is used to justify horrors that humans already decided to do (bombing schools), after the fact or as a parallel construction; and the same thing is going to apply to how AI is used for juicing mass surveillance of Americans

01.03.2026 20:16 👍 51 🔁 24 💬 1 📌 0
Pokémon Go Players Were Duped Into Training a Powerful AI Map of the Real World While you thought you were training your Pikachu, you were actually training AI to see the world.

While you were chasing that Croagunk during COVID lockdown, Niantic was building a Large Geospatial Model with pedestrian-level geolocated imagery that is useful for pinpoint military targeting.

I didn't consent to my game play data being used for surveillance. Did you?

01.03.2026 20:19 👍 7 🔁 1 💬 1 📌 0

This is catastrophic.

NSF awards across computer/info sciences, behavioral sciences, and mathematics fund frontier research and essential critique of technologies, technical systems, and tech transfer.

Lack of funding means a lot of bad outcomes, including less attention to AI risks/harms.

01.03.2026 19:58 👍 2 🔁 0 💬 0 📌 0

I started seeing it circa 2010-ish.

Apple was eating everyone's lunch. Google and Yahoo were stalling out. App-driven social sharing (Uber, Waze, Github, Twitter, early network blogging like Reddit) was exploding.

People flocked to SV with stars in their eyes to cash out, not to build/innovate.

01.03.2026 19:33 👍 7 🔁 2 💬 0 📌 0

A gentle reminder that contemporary neural networks are only an abstraction built upon Boolean logic

And that said neural networks are only an echo of a whisper of what organic neurons are.

01.03.2026 19:01 👍 101 🔁 18 💬 3 📌 5

“AI” was *always* a tool for surveillance

01.03.2026 17:12 👍 2 🔁 0 💬 0 📌 0

This is the real "AI will disrupt work" angle. They're going to lie about its capabilities, tank companies to buy cheap, increase precarity of work. When the bubble pops, they'll still be well placed to come out on top. Another cycle in the decades long upwards transfer of wealth.

01.03.2026 16:43 👍 59 🔁 29 💬 0 📌 0

Too many “middle ground” AI arguments—“I have concerns, too, but we have to adapt”—proceed from what is to me a peculiar embrace of “inevitability” which seems to be magical thinking, a way of depoliticizing the political, of self-soothing in the face of an overwhelming challenge.

01.03.2026 16:41 👍 416 🔁 102 💬 16 📌 15

Anthropic’s work also thoroughly bastardizes decades of interdisciplinary academic research that neither replicates nor supports their interpretations.

Are there insights? Sure.

But self-interested industry research is still self-interested industry research and should be viewed through that lens.

01.03.2026 16:56 👍 3 🔁 0 💬 0 📌 0

This gets dicey if someone actually believes what Anthropic researchers and execs say they believe about medium- and long-term AI, but the cult-ish side of AI safety is basically falling apart upon contact with reality.

01.03.2026 15:59 👍 15 🔁 1 💬 1 📌 0

I think the last time the USA created regime change solely through bombing was in Cambodia, when a massive carpet bombing campaign so destabilized the country that the Khmer Rouge came to power and wiped out the Royal government—the American ally. Genocide followed.

01.03.2026 07:18 👍 220 🔁 91 💬 6 📌 3
Wegmans in Buffalo surveilling shoppers, collecting reams of data Wegmans has made headlines for its use of facial recognition technology in NYC. But the business is collecting tons of other shopper data, too.

Tell that to Wegmans.

www.investigativepost.org/2026/01/09/w...

01.03.2026 14:24 👍 3 🔁 0 💬 1 📌 0

tl;dr Anthropic’s claim to moral superiority in its models is all performative PR bullshit.

The LLMs and knowledge base are trained on stolen content. There’s nothing ethical or responsible about that.

It’s mangled logic: self-interested turtles all the way down.

01.03.2026 14:08 👍 5 🔁 0 💬 1 📌 0

Anthropic has built its normie brand around “responsible AI”. Now, the US government has declared the company a supply chain risk to national security. That designation also prevents other DoD contractors from doing business with Anthropic.

Keep an eye on its stock price when the markets open.

01.03.2026 14:03 👍 3 🔁 0 💬 2 📌 0
Post image

WSJ reporting that the U.S. used Claude for the air strikes in Iran. Centcom has been using Claude "for intelligence assessments, target identification and simulating battle scenarios" www.wsj.com/livecoverage...

01.03.2026 05:41 👍 2284 🔁 947 💬 156 📌 394

OpenAI posted the terms of the deal. Reveals that it absolutely does allow for domestic surveillance. EO 12333 is how the NSA hides its domestic surveillance by capturing communications by tapping into lines *outside the US* even if it contains info from/on US persons.

openai.com/index/our-ag...

01.03.2026 05:20 👍 2643 🔁 1164 💬 31 📌 73

your enemies fighting each other can be useful but it doesn't make them your friends. anthropic doesn't want to help with hegseth's apocalyptic fantasy *because they have their own*

you do not, under any circumstances, gotta hand it to them

28.02.2026 18:20 👍 505 🔁 116 💬 8 📌 3

Sam Altman picked a hell of a day to basically urge the world to trust the morality and legal restraint of the Department of Defense

28.02.2026 07:45 👍 3947 🔁 711 💬 34 📌 25

As an educator, the “it’s a dessert topping/it’s a floor wax” argument that sticks in my mind:

If LLMs are intended to democratize knowledge, as Altman says, then where does powering autonomous weapon systems advance epistemology in some Golden Age of (Agentic) Enlightenment?

28.02.2026 03:24 👍 12 🔁 2 💬 1 📌 0

As an educator, the “it’s a dessert topping/it’s a floor wax” argument that sticks in my mind:

If LLMs are intended to democratize knowledge, as Altman says, then where does powering autonomous weapon systems advance epistemology in some Golden Age of (Agentic) Enlightenment?

28.02.2026 03:20 👍 2 🔁 0 💬 0 📌 0
States Can Block the Paramount-Warner Deal - The American Prospect But thanks to some clever maneuvering, they are already running out of time.

"Paramount/Warner Bros is not a done deal," said California AG Rob Bonta last night. But he'll have to act fast, as Paramount's merger shepherds want to rapidly close the merger and commingle assets before states have a chance to challenge it. Up-to-the minute news on this terrible merger, from me:

27.02.2026 15:06 👍 388 🔁 171 💬 10 📌 13
Are AI-generated summaries suitable for studying and research? Despite didactic, ethical, and environmental concerns, the use of GenAI is on the rise in academia. For most applications, the jury is still out on whether and how they will benefit education and rese...

They recommend it for generating summaries…I just wrote this piece to convince students, researchers and educational staff that summaries are a bad use case: www.tue.nl/en/our-unive...

27.02.2026 05:21 👍 7 🔁 3 💬 1 📌 1

i’m tired of every paper (even critical ones) on societal impact of AI starting by both-siding the “benefits” and risks/harms of AI

in academia, i would like us to arrive at a collective reckoning that we don’t need to play both sides. it's totally legit to clearly state just the harms. period

26.02.2026 17:06 👍 1344 🔁 348 💬 41 📌 26