Victor Ojewale

@victorojewale

PhDing @BrownCS | Algorithm auditing & accountability | Understanding Algorithmic Systems | victorojewale.github.io/ | https://victorojewale.substack.com/

211
Followers
58
Following
11
Posts
21.11.2024
Joined

Latest posts by Victor Ojewale @victorojewale

CNTR AISLE Portal

It's been a journey of nearly 3 years, but I'm very excited to announce the CNTR AISLE Portal! πŸš€ cntr-aisle.org It's a new way to review and evaluate the 1,000+ AI bills introduced in the U.S. over the last three years. Check out the Bill Library and our Profiles! #AIPolicy #OpenData

02.03.2026 17:00 πŸ‘ 24 πŸ” 14 πŸ’¬ 1 πŸ“Œ 0
Caring for yourself and each other
Resources for the Brown community, friends, family, loved ones and how to support us

New post by @michelleding.bsky.social on resources for the Brown community in the aftermath of the shooting. open.substack.com/pub/michelle...

21.12.2025 03:51 πŸ‘ 8 πŸ” 4 πŸ’¬ 0 πŸ“Œ 0

Serena Booth (@reniebird.bsky.social) reflects on the challenge of collecting human preferences to steer AI systems and wonders if we are doing it all wrong. cntr.brown.edu/news/2025-11...

This is part 1 of 3 of CNTR researcher reflections on COLM 2025.

19.11.2025 13:36 πŸ‘ 10 πŸ” 1 πŸ’¬ 0 πŸ“Œ 1

Loving the illustrations!!!

20.10.2025 15:43 πŸ‘ 2 πŸ” 0 πŸ’¬ 0 πŸ“Œ 0

Technologies like synthetic data, evaluations, and red-teaming are often framed as enhancing AI privacy and safety. But what if their effects lie elsewhere?

In a new paper with @realbrianjudge.bsky.social at #EAAMO25, we pull back the curtain on AI safety's toolkit. (1/n)

arxiv.org/pdf/2509.22872

17.10.2025 21:09 πŸ‘ 17 πŸ” 6 πŸ’¬ 1 πŸ“Œ 1

πŸ’‘ We kicked off the SoLaR workshop at #COLM2025 with a great opinion talk by @michelleding.bsky.social & Jo Gasior Kavishe (joint work with @victorojewale.bsky.social and @geomblog.bsky.social) on "Testing LLMs in a sandbox isn't responsible. Focusing on community use and needs is."

10.10.2025 14:31 πŸ‘ 15 πŸ” 4 πŸ’¬ 1 πŸ“Œ 0
Testing LLMs in a sandbox isn't responsible. Focusing on community use and needs is.

Read our full opinion abstract here: cntr.brown.edu/news/2025-09...

09.10.2025 19:33 πŸ‘ 2 πŸ” 1 πŸ’¬ 1 πŸ“Œ 0
Third Workshop on Socially Responsible Language Modelling Research (SoLaR) 2025
COLM 2025 in-person Workshop, October 10th at the Palais des Congrès in Montreal, Canada

Hi #COLM2025! 🇨🇦 I will be presenting a talk on the importance of community-driven LLM evaluations, based on an opinion abstract I wrote with Jo Kavishe, @victorojewale.bsky.social, and @geomblog.bsky.social, tomorrow at 9:30 am in 524b for the SoLaR workshop: solar-colm.github.io

Hope to see you there!

09.10.2025 19:32 πŸ‘ 9 πŸ” 6 πŸ’¬ 1 πŸ“Œ 0
Screenshot of paper title and author list:

Towards AI Accountability Infrastructure: Gaps and Opportunities in AI Audit Tooling
Victor Ojewale, Ryan Steed, Briana Vecchione, Abeba Birhane, Inioluwa Deborah Raji

Towards AI Accountability Infrastructure: Gaps and Opportunities in AI Audit Tooling by @victorojewale.bsky.social @rbsteed.com @briana-v.bsky.social @abeba.bsky.social @rajiinio.bsky.social compares the landscape of AI audit tools (tools.auditing-ai.com) to the actual needs of AI auditors.

24.07.2025 19:52 πŸ‘ 23 πŸ” 4 πŸ’¬ 1 πŸ“Œ 0
'Sovereignty' Myth-Making in the AI Race | TechPolicy.Press
Tech companies stand to gain by encouraging the illusion of a race for 'sovereign' AI, write Rui-Jie Yew, Kate Elizabeth Creasey, Suresh Venkatasubramanian.

Very excited to see this piece out in @techpolicypress.bsky.social today. This was written together with @r-jy.bsky.social and Kate Elizabeth Creasey (a historian here at Brown), and calls out what we think is a scary and interesting rhetorical shift.

www.techpolicy.press/sovereignty-...

07.07.2025 13:50 πŸ‘ 21 πŸ” 8 πŸ’¬ 0 πŸ“Œ 4

Join us for the Eval Eval Coalition Social at @facct.bsky.social tomorrow, Tuesday, June 24th, from 4:00–4:30 pm during the coffee break! We would love to have you join us and look forward to seeing you there!! #FAccT2025 #EvalEval

23.06.2025 14:41 πŸ‘ 4 πŸ” 2 πŸ’¬ 0 πŸ“Œ 1

I'm incredibly fortunate to have had the opportunity to work with this team. Truly one of the best collaborative experiences I have had to date (special s/o to our MVP @mkgerchick.bsky.social for leading this)!

Check out Marissa's talk on our paper "auditing the audits" if you're at #FAccT2025!

⬇️

23.06.2025 12:51 πŸ‘ 10 πŸ” 2 πŸ’¬ 1 πŸ“Œ 1

The welcome event and first keynote are getting started! πŸŽ‰πŸ¦‰

Our first keynote is by Suresh Venkatasubramanian (Brown University) @geomblog.bsky.social

"Are we winning yet? FAccT, AI governance, and the shape of what comes next."

#FAccT2025

23.06.2025 06:16 πŸ‘ 12 πŸ” 4 πŸ’¬ 0 πŸ“Œ 1

It was an honor and a pleasure to deliver this keynote. And perhaps one of the hardest talks I've had to design.

23.06.2025 08:52 πŸ‘ 7 πŸ” 1 πŸ’¬ 0 πŸ“Œ 0

Please come see us at the RC Trust Networking Event!
You can sign up with the QR codes around the venue and get some free drinks! πŸ™‚β€β†•οΈ

#FAccT2025

23.06.2025 13:23 πŸ‘ 4 πŸ” 3 πŸ’¬ 1 πŸ“Œ 0

Reminder that we have multiple social events this evening, all happening *outside* the conference venue!

Social: RC-Trust Networking Reception for the FAccT Community

Social: Generative AI Risks + Red-Teaming

Social: AI Workers' Inquiry

facctconference.org/2025/sociale...

23.06.2025 13:19 πŸ‘ 3 πŸ” 1 πŸ’¬ 0 πŸ“Œ 1

The deadline for proposing a social for #FAccT2025 has been extended! Get your proposals in by Wednesday, May 21.

facctconference.org/2025/cfsocial

Social Chairs will provide assistance for accepted socials with coordination, translation, suggested venues, + other logistics to support your event.

16.05.2025 14:02 πŸ‘ 4 πŸ” 3 πŸ’¬ 0 πŸ“Œ 0

The deadline to submit proposals for #FAccT2025 socials is tomorrow, May 9th! Please send us your ideas!

08.05.2025 20:14 πŸ‘ 2 πŸ” 1 πŸ’¬ 0 πŸ“Œ 0

Super grateful for the mentorship from you and the OAT team, Deb!!!

cc: @rbsteed.com @briana-v.bsky.social @abeba.bsky.social

02.05.2025 15:23 πŸ‘ 5 πŸ” 0 πŸ’¬ 0 πŸ“Œ 0

So proud of Victor for attending his first CHI and presenting this work with such skill & confidence :)

It remains such a joy to be part of his academic journey!

30.04.2025 00:42 πŸ‘ 11 πŸ” 3 πŸ’¬ 1 πŸ“Œ 0

Go check out our paper if you're at #CHI2025

28.04.2025 12:42 πŸ‘ 16 πŸ” 4 πŸ’¬ 0 πŸ“Œ 0

Proud to have worked on this with the dream team!

@rbsteed.com, @briana-v.bsky.social, @abeba.bsky.social, and @rajiinio.bsky.social.

Come for the talk! #CHI2025

P.S. Needed an excuse to post a picture of the Yokohama skyline lol

28.04.2025 11:26 πŸ‘ 5 πŸ” 0 πŸ’¬ 0 πŸ“Œ 0

We argue for moving beyond ad-hoc toolkits to a common, sustainable AI accountability infrastructure. This requires a concerted effort from academia, industry, and policy makers to develop tools to support not only evaluation but also advocacy and policy change.

28.04.2025 11:26 πŸ‘ 3 πŸ” 0 πŸ’¬ 1 πŸ“Œ 0

We taxonomized these tools based on their use in different stages of auditing, such as Harms Discovery, Transparency Infrastructure, & Audit Communication, highlighting areas of oversaturation or neglect.

28.04.2025 11:26 πŸ‘ 2 πŸ” 0 πŸ’¬ 1 πŸ“Œ 0

Through interviews with 35 practitioners and analysis of 450+ AI audit tools, we identify gaps between current tooling and effective accountability.

Despite many evaluation tools, there's a need for systems supporting the full audit cycle, especially harms discovery and stakeholder communication.

28.04.2025 11:26 πŸ‘ 2 πŸ” 0 πŸ’¬ 1 πŸ“Œ 0

Excited to present "Towards AI Accountability Infrastructure: Gaps and Opportunities in AI Audit Tooling" at #CHI2025 tomorrow (today)!

πŸ—“ Tue, 29 Apr | 9:48–10:00 AM JST (Mon, 28 Apr | 8:48–9:00 PM ET)
πŸ“ G401 (Pacifico North 4F)

πŸ“„ dl.acm.org/doi/10.1145/...

28.04.2025 11:26 πŸ‘ 22 πŸ” 8 πŸ’¬ 2 πŸ“Œ 2

Hahaha, good one 🀣

28.04.2025 01:01 πŸ‘ 1 πŸ” 0 πŸ’¬ 0 πŸ“Œ 0

Yaaaay
Welcome to Brown πŸŽ‰πŸŽ‰πŸŽ‰

26.04.2025 16:07 πŸ‘ 2 πŸ” 0 πŸ’¬ 1 πŸ“Œ 0
The Malicious Technical Ecosystem: Exposing Limitations in Technical Governance of AI-Generated Non-Consensual Intimate Images of Adults
In this paper, we adopt a survivor-centered approach to locate and dissect the role of sociotechnical AI governance in preventing AI-Generated Non-Consensual Intimate Images (AIG-NCII) of adults, coll...

Excited to be presenting a new paper with @harinisuresh.bsky.social on the critical topic of technical prevention/governance of AI-generated non-consensual intimate images of adults, aka "deepfake pornography," at #CHI2025 chi-staig.github.io on 4/27, 10:15–11:15 JST. arxiv.org/abs/2504.17663 🧡

25.04.2025 17:41 πŸ‘ 7 πŸ” 2 πŸ’¬ 1 πŸ“Œ 2