Help us build a Trust and Fairness Agenda! I'm excited to hear everyone's ideas!
Excited to join tonight!
Got great energy x AI policy ideas! Submit a proposal, and we will help you cultivate it from a seedling idea to a full-fledged memo with an advocacy plan!
🚨 POLICY WRITING OPPORTUNITY 🚨
We're looking for forward-thinking policy ideas on:
• AI applications for grid management and modernization
• AI and energy-related R&D
• Measuring and managing the resource consumption of AI and data centers
APPLY BY FEB 28
fas.org/accelerator/...
If anyone's parent watches YouTube Shorts, here's me pretending to be on a date and talking about PETs. youtube.com/shorts/JtQfF...
12/ Transparency isn't just good policy; it's essential for AI use in government.
11/ Bottom line:
• If the AI Use Case Inventory disappears, we lose a critical tool for public trust.
• More AI in government + less transparency = a perfect storm for harm and erosion of trust in emerging tech.
10/ Case in point: the Netherlands, where an AI-driven fraud detection system wrongly cut off benefits to vulnerable families.
Without transparency, there's little oversight to prevent similar failures in the U.S. www.politico.eu/article/dutc...
9/ But what happens if Trump II abolishes or weakens the inventory?
It's already clear the administration plans to increase AI use for fraud detection, one of the riskiest AI applications. www.nytimes.com/2025/02/03/t...
8/ Why does this matter? The AI Use Case Inventory is one of the most important transparency tools for AI in government.
It allows civil society to track federal AI deployments, identify risks, and hold agencies accountable. Without it, the public is left in the dark.
7/ If an AI system impacts rights or safety, agencies must also disclose:
⚠️ Risk management & independent evaluations
⚠️ Potential harms & mitigation efforts
⚠️ Whether people can opt out in favor of a human decision-maker
6/ This new guidance required agencies to report much more information, including:
• Intended purpose & expected benefits
• AI system outputs & development details
• Privacy, bias, & safety risks
• Transparency measures & public impact
5/ Originally, agencies reported basic details on their AI use cases. But the Biden administration greatly expanded this. Under OMB Guidance M-24-10, Biden broadened the definition of AI (aligning with the John S. McCain NDAA of 2019).
4/ Most importantly, EO 13960 created the AI Use Case Inventory, a transparency tool requiring agencies to disclose AI systems they use or plan to use.
3/ To implement this, EO 13960 directed:
• OMB to create a policy roadmap for AI adoption
• Agencies to inventory their AI use cases
• GSA to recruit AI experts via the Presidential Innovation Fellows program
• OPM to explore rotational programs for AI expertise
2/ In 2020, Trump issued EO 13960, setting 9 guiding principles for AI in federal agencies, prioritizing lawful, effective, secure, transparent, and accountable AI use.
1/ We don't know exactly what Trump II will do, but it's shaping up to be very different from Trump I. So let's look at how the first Trump administration approached AI in government, and what that could mean for the AI Use Case Inventory.
A key part of these memos? The AI Use Case Inventory, which in the final days of the Biden Admin documented 1,700 federal AI use cases. 🧵
🚨 Within the next 60 days (now much less), the Trump Administration will review OMB Guidance M-24-10 & M-24-18, which lay out how the federal government should use, acquire, and manage AI.
As long as nuclear weapons exist, nuclear war remains possible. And all nuclear-weapon states are pursuing modernization programs.
🚨 SCIPOL FELLOWSHIP OPPORTUNITY 🚨
Team FAS is looking for senior fellows to advance innovative policy and drive positive change. If you're a leading light in your field and are ready to shape policy discourse and implementation, we want you for Team FAS.
Apply by Jan 31
fas.org/career/senio...
The federal govt's increasing reliance on CAI/PII is outpacing its ability to regulate it, putting your data in the wrong hands.
As AI systems become increasingly integrated into government processes, protecting fundamental constitutional rights cannot be an afterthought.
fas.org/publication/...
Recommendation 3. Build Government Capacity for the Use of Privacy Enhancing Technologies to Bolster Anonymization Techniques
Recommendation 2. Expand Privacy Impact Assessments (PIA) to Incorporate Additional Requirements and Periodic Evaluations
FedRAMP should add CAI/PII to the mix, requiring that datasets be assessed on the following information (see screenshot).
Bonus: FedRAMP authorizations are strictly enforced, offering a level of rigor that voluntary assessments just can't match.
The Federal Risk and Authorization Management Program, lovingly known as FedRAMP, already has a mandate to ensure the security of the cloud service providers the federal government uses, and that mandate has recently been expanded to AI technologies.
When a federal agency wants new software, say, cloud management software, it has to make a series of assessments and justifications to procure and implement it. Why not do the same for datasets?
(this one is wonky, stay with us here)
Recommendation 1. Enable FedRAMP to Create an Authorization System for Third-Party Data Sources
But without statutory protections, it is incumbent on the executive branch to craft clear guidance. The Office of Management and Budget asked for help on creating such guidance, and we happily obliged.
On the legislative side, the Fourth Amendment Is Not For Sale Act (H.R.4639) would bar technology providers from sharing customer records with anyone, including federal agencies, but the bill has stalled in the Senate.