King's has several funded PhD projects in Safe, Trusted and Responsible AI. If you're a UK/home student, please check them out!
www.findaphd.com/phds/program...
I'm involved in 3 of the projects, links below.
@mackenziej
Postdoctoral Research Fellow, Northumbria University · Research Volunteer for CETaS, The Alan Turing Institute · Prev: King’s College London, Algorithm Audit, UK DSIT, HURIDOCS, Villanova University · she/her · https://mjorgen1.github.io
Today and tomorrow, PROBabLE Futures is hosting the workshop:
“Under Investigation: Large Language Models for Policing”.
We’re proud to have Mackenzie Jorgensen as a panel member. Well done, Mackenzie, for contributing your expertise to this critical conversation on responsible innovation in policing.
Join us for our AI in Policing report launch next Tues (2 Dec), with:
🎙 @jenitennison.com
🎙 Dylan Alldridge
🎙 Dr @kenzjorg206
Tickets: bit.ly/44owxGT
📍 Macfarlanes
🕠 5:30pm registration
“Hey AI, send Jane Doe a Signal message!” Not a particularly good idea – and part of a paradigm shift currently taking place through Agentic AI, as @meredithmeredith.bsky.social told us.
Check out our blog on uncertainty, discrimination, and AI!
At the @kul-ai-sschool.bsky.social alumni events, I offered an applied and ethical perspective on a video AI panel. I also presented my postdoc research on uncertainty in AI decision-making and the risks of discrimination. I’m so inspired by the AISS community and thankful to the organizers! 👏🏻
lol if you don’t realize that teaching computers is teaching about systems of power I don’t think you should be a professor at a university
A little over a week ago, I had the pleasure of giving talks at NYU and Villanova University on “Responsible AI: From a Fairness and Human Oversight Perspective.” The students’ questions were very thoughtful, and it was a blast connecting with dear professors and colleagues!
Do you have Friday night plans in London on April 11th? If not, come join for a fantastic panel discussing AI & Journalism! I’ve really enjoyed co-organizing this event. Register here: www.turing.ac.uk/events/how-a...
I believe that the AI community can learn so much from the careful and intentional design principles of HCI research, which prioritizes critical consideration of how systems differently impact affected groups during design and deployment.
💡 Earlier this week, I spoke as an AI expert alongside Prof. Jonathan Lazar, Dr. Sanjay Modgil, & Prof. Yulan He on a panel at the Cockney CHAI gathering at KCL before CHI 2025. We discussed the crucial role of HCI in AI research and the ethical challenges with human-AI interactions today.
This is where you go to tell the National Park Service to restore the word "transgender" in its entry on the Stonewall monument.
(Scroll down.)
Amnesty Tech's Likhita Banerji and Damini Satija say as world leaders convene for the Paris AI Action Summit next week, they must confront a troubling reality: the deployment of AI in the public sector is exacerbating inequality and violating human rights.
Through EAAMO's Convos with Practitioners working group which I co-organize, I interviewed Yolanda at the Human Rights Information and Documentation Systems (HURIDOCS) NGO. The blog about our chat on tech and human rights worldwide is out! medium.com/eaamo/interv...
My interdisciplinary research paper on the legality of bias mitigation methods in the UK has been published! Check it out in the IEEE Technology and Society Magazine. ieeexplore.ieee.org/document/104...
I’m really excited to share my conversation with Brooke and Paige on their podcast covering all things Responsible AI, women in tech, and my nonprofit back in WA! open.spotify.com/episode/5QmC...
She gave an important reminder that researchers and technologists cannot divorce algorithmic systems from the “history which has produced technology in the past that has disproportionately impacted marginalized groups.”
I interviewed Damini, the Head of Amnesty Tech’s Algorithmic Accountability Lab, and learned about the work her multidisciplinary team conducts in response to the increasing use of algorithmic systems in welfare provision. Check out the blog about our chat! #philtech
md4sg.medium.com/interview-wi...
“AI developed in one nation can impact the lives and livelihoods of billions of people around the world.” — Kamala Harris
"Civil society groups and the private sector also have an important role to play. Civil society groups advocate for the public interest. They hold the public and private sectors to account and are essential to the health and stability of our democracies." — Kamala Harris
“In the absence of regulation and strong oversight, some technology companies choose to prioritize profit over the wellbeing of their customers, the safety of our communities, and the stability of our democracy.” — Kamala Harris
Kamala Harris spoke today at the US Embassy in London on AI and the White House’s initiatives on it as laid out in their Executive Order. I’ll give some of my favorite quotes from her speech below. 👇🏻
Loved hearing Abeba Birhane speak at the AI & Society Forum keynote yesterday! One of the most interesting nuggets she shared was that scaling up is the newest fad. This is dangerous because, as datasets get bigger, so do their problems, as she’s shown in her own research! #aifringe
Hello, all philosophers of technology and anyone else. I set up a feed specifically for posts from people working in this area. Hopefully it'll work just like Twitter: just use "#philtech" and your post will show up under this feed. Anyone following the feed will see your post. Happy discussions! :)
Jack Stilgoe’s professorial inaugural lecture at UCL last night emphasized that we need to deeply think about who is behind the creation of emerging tech and who is benefiting from it. He also stressed the importance of independent science! Loved his showbiz references too. #sts #philtech #ai