Miranda Bogen

@mbogen

Director of the AI Governance Lab @cendemtech.bsky.social / responsible AI + policy

371
Followers
203
Following
16
Posts
18.11.2024
Joined
Latest posts by Miranda Bogen @mbogen

Risky Business: Advanced AI Companies’ Race for Revenue Companies developing advanced AI find themselves at a critical juncture. Some have transformed from research labs to product companies, others from social media platforms and internet giants to compet...

For deeper analysis on AI advertising and other business models, check out @cdt.org’s recent report Risky Business: Advanced AI Companies’ Race for Revenue cdt.org/insights/ris...

04.02.2026 15:16 👍 1 🔁 2 💬 0 📌 0

The choices that advanced AI companies make today about how they’ll cover the mind-boggling costs they are taking on to build AI systems will inevitably shape the systems themselves. That could have an enormous impact on our world for decades to come.

04.02.2026 15:16 👍 5 🔁 1 💬 1 📌 0

Anthropic’s announcement that it won’t incorporate ads into Claude engages honestly with the fact that advertising can cultivate deeply perverse incentives, even when platforms claim otherwise.

04.02.2026 15:16 👍 2 🔁 1 💬 1 📌 0
Claude is a space to think | Anthropic Anthropic explains why Claude will remain ad-free—how advertising incentives conflict with building a genuinely helpful AI assistant users can trust.

"There are many good places for advertising. A conversation with Claude is not one of them." www.anthropic.com/news/claude-...

04.02.2026 15:16 👍 5 🔁 2 💬 1 📌 0
What AI “remembers” about you is privacy’s next frontier Agents’ technical underpinnings create the potential for breaches that expose the entire mosaic of your life.

I've been surprised just how little we've been talking about privacy implications of frontier AI beyond training data. tl;dr -- the architecture of AI systems will matter a lot, and developers can act now to do better. New piece in @technologyreview.com: www.technologyreview.com/2026/01/28/1...

28.01.2026 15:52 👍 13 🔁 6 💬 0 📌 0
Risky Business: Advanced AI Companies’ Race for Revenue Companies developing advanced AI find themselves at a critical juncture. Some have transformed from research labs to product companies, others from social media platforms and internet giants to compet...

For deeper analysis, CDT’s recent report Risky Business: Advanced AI Companies’ Race for Revenue explores the array of business models that advanced AI companies are currently implementing or considering, including advertising, and how they are likely to affect users. cdt.org/insights/ris... (5/5)

16.01.2026 19:50 👍 6 🔁 3 💬 0 📌 0

AI companies should be extremely careful not to repeat the many mistakes made in, and harms resulting from, the adoption of personalized ads on social media and around the web. (4/5)

16.01.2026 19:50 👍 4 🔁 2 💬 3 📌 0

People are using chatbots for all sorts of reasons, including as companions and advisors. There’s a lot at stake when that tool tries to exploit users’ trust to hawk advertisers’ goods. (3/5)

16.01.2026 19:50 👍 6 🔁 2 💬 1 📌 0

Even if AI platforms don’t share data directly with advertisers, business models based on targeted advertising put really dangerous incentives in place when it comes to user privacy. This decision raises real questions about how business models will shape AI in the long run. (2/5)

16.01.2026 19:50 👍 4 🔁 2 💬 1 📌 0

It's happening. OpenAI is piloting ads in ChatGPT. openai.com/index/our-ap...

In introducing ads to ChatGPT, OpenAI is starting down a risky path. (1/5)

16.01.2026 19:50 👍 9 🔁 7 💬 2 📌 4

And sure enough, OpenAI just announced it would be introducing ads to ChatGPT.

Good thing @mbogen.bsky.social & I wrote about the incentives this would create for AI companies, and how those incentives were likely to shape the user experience. TL;DR: it's not great!

#itsthebusinessmodel

16.01.2026 19:32 👍 49 🔁 29 💬 4 📌 2

a recent New York State audit of NYC's Local Law 144 — ostensibly designed to regulate potential bias and discrimination in automated employment tools — is fairly scathing in its assessment of how implementation and enforcement of the law are going.

simply put, LL 144 does not work.

08.01.2026 15:11 👍 4 🔁 7 💬 1 📌 1

New report from @mbogen.bsky.social & yours truly, on how the big AI companies are trying to make money and what it means for all of us.

I am more proud of the title than I have any right to be.

07.01.2026 20:36 👍 7 🔁 4 💬 0 📌 2
A Roadmap for Responsible Approaches to AI Memory.

New from CDT: “A Roadmap for Responsible Approaches to AI Memory” by @mbogen.bsky.social & Ruchika Joshi explores how AI systems store, recall, and use info—and what that means for privacy, transparency, and user control. cdt.org/insights/a-r...

12.12.2025 01:30 👍 3 🔁 2 💬 1 📌 0

[NeurIPS '25] Our oral slot and poster session on "Machine Unlearning Doesn't Do What You Think: Lessons for Generative AI Policy and Research" are tomorrow, December 4! [https://arxiv.org/abs/2412.06966]

Oral: 3:30-4pm PST, Upper Level Ballroom 20AB

Poster 1307: 4:30-7:30pm PST, Exhibit Hall C-E

03.12.2025 20:57 👍 3 🔁 2 💬 1 📌 0

This sets a dangerous precedent for AI more broadly: without guardrails to avoid harmful outcomes, a huge variety of decisions impacting people’s financial stability, health & liberty will reflect histories of overt discrimination — a resurgence disguised under an illusion of neutrality.

14.11.2025 17:59 👍 2 🔁 1 💬 0 📌 0

The CFPB is responsible for addressing AI’s role in credit discrimination, and this proposed rule disregards that responsibility. The agency should instead direct its efforts to ensuring creditors implement fairness testing to help them prevent discrimination when adopting AI.

14.11.2025 17:59 👍 1 🔁 1 💬 1 📌 1

Disparate impact is particularly important as AI is increasingly used in making fundamental decisions. Bias in AI’s training and design degrades its performance for certain protected groups, even without overt discriminatory intent. Disparate impact recognizes this.

14.11.2025 17:59 👍 1 🔁 2 💬 1 📌 0

AI itself doesn’t have “intent,” and people have no real transparency regarding how creditors use AI in any aspects of credit transactions. This makes it incredibly difficult, if not impossible, to show that AI was used to intentionally discriminate against an applicant.

14.11.2025 17:59 👍 1 🔁 1 💬 1 📌 0
Equal Credit Opportunity Act (Regulation B) The Consumer Financial Protection Bureau (Bureau or CFPB) is issuing a proposed rule for public comment that amends provisions related to disparate impact, discouragement of applicants or prospective ...

The CFPB proposed a new rule where it would no longer recognize disparate impact liability when enforcing the Equal Credit Opportunity Act. This would eliminate a key protection against discrimination in access to credit, including when AI is involved.
www.federalregister.gov/documents/20...

14.11.2025 17:59 👍 5 🔁 3 💬 1 📌 1

🚨Call for policy proposals

If AI adoption is not slowing down, policy governing safety and security practices needs to speed up. This is where you come in.

16.10.2025 14:42 👍 5 🔁 4 💬 1 📌 1

AI companies are starting to build more and more personalization into their products, but there's a huge personalization-sized hole in conversations about AI safety/trust/impacts.

Delighted to feature @mbogen.bsky.social on Rising Tide today, on what's being built and why we should care:

22.07.2025 00:49 👍 14 🔁 6 💬 1 📌 0
Personalized AI is rerunning the worst part of social media's playbook The incentives, risks, and complications of AI that knows you

AI companies are starting to promise personalized assistants that “know you.” We’ve seen this playbook before — it didn’t end well.

In a guest post for @hlntnr.bsky.social’s Rising Tide, I explore how leading AI labs are rushing toward personalization without learning from social media’s mistakes.

21.07.2025 18:32 👍 14 🔁 5 💬 0 📌 3
It’s (Getting) Personal: How Advanced AI Systems Are Personalized This brief was co-authored by Princess Sampson. Generative artificial intelligence has reshaped the landscape of consumer technology and injected new dimensions into familiar technical tools. Search e...

Personalization is political. Very excited to share a piece I co-authored with @mbogen.bsky.social as a Google Public Policy Fellow @cendemtech.bsky.social!

cdt.org/insights/its...

05.05.2025 16:51 👍 16 🔁 4 💬 1 📌 1
OpenAI slashes AI model safety testing time Testers have raised concerns that its technology is being rushed out without sufficient safeguards

From CDT’s @mbogen.bsky.social: “As #AI companies are racing to put out increasingly advanced systems, they also seem to be cutting more and more corners on safety, which doesn’t add up.” www.ft.com/content/8...

11.04.2025 18:29 👍 22 🔁 12 💬 1 📌 0
Adopting More Holistic Approaches to Assess the Impacts of AI Systems by Evani Radiya-Dixit, CDT Summer Fellow As artificial intelligence (AI) continues to advance and gain widespread adoption, the topic of how to hold developers and deployers accountable for the AI systems they implement remains pivotal. Assessments of the risks and impacts of AI systems tend to evaluate a system’s outcomes or performance through methods like […]

To truly understand AI’s risks & impacts, we need sociotechnical frameworks that connect the technical with the societal. Holistic assessments can guide responsible AI deployment & safeguard safety and rights.

📖 Read more: cdt.org/insights/ado...

16.01.2025 17:47 👍 6 🔁 2 💬 0 📌 0
Hypothesis Testing for AI Audits Introduction AI systems are used in a range of settings, from low-stakes scenarios like recommending movies based on a user’s viewing history to high-stakes areas such as employment, healthcare, finance, and autonomous vehicles. These systems can offer a variety of benefits, but they do not always behave as intended. For instance, ChatGPT has demonstrated bias […]

CDT’s Amy Winecoff + @mbogen.bsky.social’s new explainer dives into the fundamentals of hypothesis testing, how auditors can apply it to AI systems, & where it might fall short. Using simulations, we show its role in detecting bias in a hypothetical hiring algorithm. cdt.org/insights/hyp...

16.01.2025 19:23 👍 9 🔁 3 💬 1 📌 0
Graphic for CDT AI Gov Lab's report, "Assessing AI: Surveying the Spectrum of Approaches to Understanding and Auditing AI Systems." Illustration of a collection of AI "tools" and "toolbox" – a hammer and red toolbox – and a stack of checklists with a pencil.

NEW REPORT: CDT AI Governance Lab’s Assessing AI report looks at the rise of complex automated systems, which demands a robust ecosystem for managing risks and ensuring accountability. cdt.org/insights/ass... cc: @mbogen.bsky.social

16.01.2025 17:37 👍 9 🔁 3 💬 1 📌 0
Upturn Seeks a Research Associate This position is ideal for someone who is excited about sharp, interdisciplinary research on a range of topics related to technology, policy, and justice.

@upturn.org is hiring for a research associate! Excellent opportunity to work with some fantastic folks! www.upturn.org/join/researc...

17.12.2024 13:13 👍 9 🔁 5 💬 1 📌 0

howdy!

the Georgetown Law Journal has published "Less Discriminatory Algorithms." it's been very fun to work on this w/ Emily Black, Pauline Kim, Solon Barocas, and Ming Hsu.

i hope you give it a read — the article is just the beginning of this line of work.

www.law.georgetown.edu/georgetown-l...

18.11.2024 16:40 👍 51 🔁 15 💬 4 📌 5