Ollie Stephenson

@technolliegist

Associate Director of AI and Emerging Technology Policy, @scientistsorg.bsky.social. Views are my own.

46 Followers · 38 Following · 22 Posts · Joined 20.11.2024

Latest posts by Ollie Stephenson @technolliegist

Senior Manager, AI Safety and Security Policy
As Senior Manager, AI Safety and Security Policy, you will drive ambitious efforts to turn cutting-edge technical insights into real policy impact—shaping how the U.S. anticipates and manages the chal...

If you want to help shape how the U.S. anticipates and governs advanced AI and make a future that's safer for everyone, I'd encourage you to take a look.

Apply by December 15. More info and application form available here:
fas.org/career/senio...

18.11.2025 17:16 👍 1 🔁 0 💬 0 📌 0

I’m hiring a Senior Manager for AI Safety & Security Policy at FAS.

You’ll help turn technical insights about frontier AI risks into real policy outcomes—spotting windows for impact, shaping proposals decision-makers can use, and working directly with researchers + government.

18.11.2025 17:16 👍 2 🔁 2 💬 1 📌 0

Want to learn about how AI is being integrated into military decision-making and connect with experts working on this issue?

Register for our reception on November 19th, in partnership with the @futureoflife.org

🔗 luma.com/2zkutj53

04.11.2025 17:48 👍 4 🔁 2 💬 1 📌 0
Anthropic Has a Plan to Keep Its AI From Building a Nuclear Weapon. Will It Work?
Anthropic partnered with the US government to create a filter meant to block Claude from helping someone build a nuke. Experts are divided on whether it's a necessary protection—or a protection at all.

Anthropic says its AI won't help you build a nuclear weapon. Will it work? And can a chatbot even help build a nuke?

20.10.2025 14:04 👍 20 🔁 12 💬 3 📌 10

AI-powered nukes might not be coming next week, but "we don’t know where they’ll be in five years' time … and it’s worth being prudent about that fact," @technolliegist.bsky.social tells @wired.com

20.10.2025 14:12 👍 6 🔁 5 💬 0 📌 0

Going to #Abundance2025? Say hi! I’ll be there with plenty of my @scientistsorg.bsky.social colleagues. Come to me with any questions about artificial intelligence policy, and if you’re curious about clean energy, R&D innovation, or anything else, I’ll direct you to the geniuses I work with.

03.09.2025 20:01 👍 0 🔁 0 💬 0 📌 0

AI doesn’t live in the cloud. It runs on land, water, and electricity. We're challenging the idea that AI is clean or green.

Vote now for "Who Pays for AI?" 🗳️ participate.sxsw.com/flow/sxsw/sx...

13.08.2025 13:59 👍 6 🔁 5 💬 0 📌 1

11/n At FAS we’ll keep working with scientists & policymakers to craft AI policy that serves everyone.

25.07.2025 03:46 👍 4 🔁 1 💬 0 📌 0

10/n 🔎 Bottom Line: To reap AI’s benefits we must trust it—we need more research, careful adoption & strong guardrails for high‑risk uses. The plan has bright spots but backslides on bias & climate and collides with deep staffing/funding cuts in government.

25.07.2025 03:46 👍 4 🔁 1 💬 1 📌 0
POLICY SPRINT: AI & Energy
From using AI to optimize power grids to accelerating clean energy R&D, AI holds huge potential, while also introducing new challenges related to climate, equity, infrastructure, security, and sustain...

9/n Also disappointing: deleting climate‑change references. AI uses a lot of energy and we can’t manage what we don’t measure. Our AI & Energy Policy Sprint shows how to track AI’s footprint and use AI to fight climate change: fas.org/accelerator/...

25.07.2025 03:46 👍 4 🔁 1 💬 1 📌 0

8/n ❌ The Ugly:
AI bias is real & measurable. Yet the plan tells NIST to drop “diversity, equity & inclusion” from its AI Risk Mgmt Framework and requires federal models be “free from ideological bias.” Much depends on implementation, but this risks hiding real problems.

25.07.2025 03:46 👍 5 🔁 2 💬 1 📌 0

7/n Without national regs, state experiments are how we learn what responsible AI looks like. A regulatory Wild West won’t build public trust.

25.07.2025 03:46 👍 4 🔁 1 💬 1 📌 0

6/n ⚠️ The Bad
Last month the Senate stripped a clause from OBBBA that would have restricted state AI rules. The plan tries again to block state guardrails even as Congress sets no federal standard.

25.07.2025 03:46 👍 4 🔁 1 💬 1 📌 0
Focused Research Organizations - Federation of American Scientists
Not all scientific challenges can be met by academia and industry. This is where Focused Research Organizations can bridge the gap.

5/n ➡️ Focused Research Organizations (FROs): They tackle narrow, high‑impact problems that are a poor fit for startups. FAS first championed FROs in 2020, and we think this is their first federal embrace. We've published a list of promising FRO ideas here: fas.org/initiative/f...

25.07.2025 03:46 👍 3 🔁 1 💬 1 📌 0

4/n ➡️ Security measures: Steps on cybersecurity, biosecurity, secure‑by‑design AI & incident response aim to stop harms before they freeze innovation.

25.07.2025 03:46 👍 2 🔁 1 💬 1 📌 0

3/n ➡️ Broad R&D agenda: Beyond interpretability, the plan backs research on robustness, controllability, new AI paradigms & an evaluation ecosystem.

25.07.2025 03:46 👍 2 🔁 1 💬 1 📌 0
Accelerating AI Interpretability
If AI systems are not always reliable and secure, this could inhibit their adoption, especially in high-stakes scenarios, potentially compromising American AI leadership.

2/n 🚀 The Good
➡️ Interpretability: We need to see inside AI's black box. With FAS AI Fellow Matteo Pistillo, we've drafted a federal roadmap to advance AI interpretability: fas.org/publication/...

25.07.2025 03:46 👍 4 🔁 2 💬 1 📌 0

1/n When the Trump admin began drafting its AI Action Plan, we at the Federation of American Scientists (@scientistsorg.bsky.social) offered ideas to advance innovation, maintain safety and security, and support government institutions. Now that the plan is live, here’s my take:

25.07.2025 03:46 👍 3 🔁 4 💬 1 📌 0
The Future at Stake: AI and Nuclear Weapons | Physicians for Social Responsibility
Join Physicians for Social Responsibility for a panel webinar exploring how artificial intelligence impacts nuclear weapons systems, security, and dialogues. This follow-up to January's succe...

Registration: psr.org/event/the-fu...

27.04.2025 00:53 👍 1 🔁 0 💬 0 📌 0
Cover image for the event showing a robot hand hovering over a nuclear button.

☢️ Safeguarding Nuclear Command and Control in the Age of AI ☢️

I’ll be speaking Monday at 12pm ET on how AI might impact nuclear risks. See registration link below if you'd like to join!

27.04.2025 00:53 👍 3 🔁 2 💬 1 📌 0
HIRING: AI and Emerging Tech Manager
FAS seeks an AI & Emerging Tech Manager to implement and support a diverse set of projects within the broader Emerging Tech & Competitiveness policy portfolio.

🚨 Hiring: AI & Emerging Tech Manager @scientistsorg.bsky.social 🚨

Shape U.S. #AI policy—drive AI equity work, build S&T talent pipelines, tackle AI-safety & energy projects.

💼 $70k–$87.5k | Hybrid DC (2-3 days in office).
Apply soon, ideally by May 5! → fas.org/career/ai-an...

25.04.2025 22:13 👍 2 🔁 1 💬 0 📌 0

Great opportunity to develop concrete policy ideas around AI, energy, and environment!

21.02.2025 21:59 👍 1 🔁 0 💬 0 📌 0
How DeepSeek is looming over the Paris AI summit

www.politico.com/newsletters/...

11.02.2025 16:43 👍 0 🔁 0 💬 0 📌 0
Text saying: "“What DeepSeek has really done is capture public attention in a way that I haven’t really seen since maybe ChatGPT,” said Oliver Stephenson, the associate director for AI and emerging tech policy at the Federation of American Scientists. “That really boils through into how policymakers are paying attention, and that just shifts the entire ecosystem of Washington, D.C., and policymakers around the world to really focus again on this as a thing that they need to be paying attention to.”"

As world leaders meet in Paris, I spoke with @politico.com about DeepSeek and its impact on AI policy discussions.

11.02.2025 16:43 👍 2 🔁 1 💬 1 📌 0

🚨 Within the next 60 days (now much less), the Trump Administration will review OMB Guidance M-24-10 & M-24-18, which lay out how the federal government should use, acquire, and manage AI.

10.02.2025 21:36 👍 9 🔁 3 💬 1 📌 3

How should we manage AI's growing resource consumption, and use AI to promote clean energy? @scientistsorg.bsky.social wants to hear your ideas! Find out more below.

03.02.2025 18:51 👍 3 🔁 1 💬 0 📌 0

ATTENTION NUKE NERDS: We’re hosting a one-week, in-person OSINT bootcamp to teach a new generation of open-source nuke investigators. If you’re an early- to mid-career nuclear weapons analyst, this bootcamp is calling for you.

Apply today ▶️ fas.org/osint-bootcamp-2025/

03.02.2025 14:26 👍 27 🔁 21 💬 1 📌 6
Is the DeepSeek Panic Overblown?
AI scientists contend that the outsize reaction to the rise of the Chinese AI company DeepSeek is misguided.

“What we're seeing is an impressive technical breakthrough built on top of Nvidia's product that gets better as you use more of Nvidia's product...That does not seem like a situation in which you're going to see less demand for Nvidia's product.”
time.com/7211646/is-d... @scientistsorg.bsky.social

31.01.2025 23:28 👍 2 🔁 0 💬 0 📌 0