Civic and Responsible AI Lab (CRAIL)'s Avatar

Civic and Responsible AI Lab (CRAIL)

@civicandresponsibleai.com

A research lab working towards Responsible AI (and Robotics), and the use of AI for civil society and empowerment. Based at King's College London, UK. Led by Martim Brandao (@martimbrandao.bsky.social). Website: https://www.civicandresponsibleai.com/

35
Followers
136
Following
7
Posts
13.09.2025
Joined
Posts Following

Latest posts by Civic and Responsible AI Lab (CRAIL) @civicandresponsibleai.com

Preview
Leveraging collaborative XAI for racism detection and explanation in political and media discourse at King’s College London on FindAPhD.com PhD Project - Leveraging collaborative XAI for racism detection and explanation in political and media discourse at King’s College London, listed on FindAPhD.com

Funded PhD Project 3: "Leveraging collaborative XAI for racism detection and explanation in political and media discourse", with Prof. Nicola Rollock

www.findaphd.com/phds/project...

19.01.2026 12:50 👍 0 🔁 1 💬 0 📌 0
Preview
Work, Employment and Robots: Investigating Working Conditions in the Supply Chain of Robotics at King’s College London on FindAPhD.com PhD Project - Work, Employment and Robots: Investigating Working Conditions in the Supply Chain of Robotics at King’s College London, listed on FindAPhD.com

Funded PhD Project 2: "Work, Employment and Robots: Investigating Working Conditions in the Supply Chain of Robotics", with Funda Ustek Spilda @fundaustek.bsky.social

www.findaphd.com/phds/project...

19.01.2026 12:49 👍 0 🔁 2 💬 1 📌 0

Funded PhD Project 1: "AI for AI Oversight? Evaluating and monitoring corporate AI risks using publicly available data", with Claudia Aradau @cearadau.bsky.social

www.findaphd.com/phds/project...

19.01.2026 12:47 👍 1 🔁 2 💬 1 📌 0
Preview
GeoDataMonitor: Towards monitoring usage of geospatial datasets in machine learning models at King’s College London on FindAPhD.com PhD Project - GeoDataMonitor: Towards monitoring usage of geospatial datasets in machine learning models at King’s College London, listed on FindAPhD.com

Fully-funded #PhDPosition, between KCL @civicandresponsibleai.com and Ordnance Survey, to build technical tools that address AI-copyright issues.

www.findaphd.com/phds/project...

Deadline: Feb 27.
Eligibility: UK/home students or exceptional international students.

#AISafety #ResponsibleAI

29.01.2026 10:34 👍 1 🔁 1 💬 0 📌 1
Post image

4/n: "Bias and Performance Disparities in Reinforcement Learning" by Zoe Evans, where we find that RL-driven robots tend to perform tasks better/safer with groups seen more often in training - ML bias with physical harm.
#HRI2025 #robots #bias #ResponsibleAI
doi.org/10.1109/HRI6...

19.12.2025 17:04 👍 0 🔁 1 💬 0 📌 0
Post image

3/n: "Should Delivery Robots Intervene if They Witness Civilian or Police Violence?" by T.Seassau surveys public opinion on this question, finding mixed support for intervention, and a need to codesign w/ victims of (police) violence. @tomwilliams.phd
#ROMAN2025 #HRI #robots
doi.org/10.1109/ro-m...

19.12.2025 12:51 👍 2 🔁 2 💬 1 📌 0
Post image

2025 papers 2/n: "Robot arms too short?" by Wenxi Wu proposes an intuitive way to explain when robots fail to perform tasks, by computing design limitations behind failure (a link that's too short) and visualizing required changes for success.
#ROMAN2025 #HRI #robots #AI #XAI
doi.org/10.1109/ro-m...

19.12.2025 12:34 👍 0 🔁 1 💬 1 📌 0
Preview
Harvesting Perspectives: A Worker-Centered Inquiry into the Future of Fruit-Picking Farm Robots The integration of robotics in agriculture presents promising solutions to challenges such as labour shortages and increasing global food demand. However, existing visions of agriculture robots often ...

Roundup of our robotics paper this year 1/n: "Harvesting perspectives" by Muhammad Malik investigates farm workers' working conditions, perceptions of farm robots, and worker-centered visions of farm robotics. #ROMAN2025 #HRI #robots #AI #ResponsibleAI
doi.org/10.1109/ro-m...

18.12.2025 12:12 👍 0 🔁 1 💬 1 📌 0
Preview
Robots powered by popular AI models risk encouraging discrimination and violence | King's College London Robots powered by popular AI models are currently unsafe for real-world use.

Robots powered by popular AI models are currently unsafe for general purpose real-world use.

Researchers from @kingsnmes.bsky.social & @cmu.edu evaluated how robots that use large language models (LLMs) behave when they have access to personal information.

www.kcl.ac.uk/news/robots-...

11.11.2025 15:37 👍 7 🔁 5 💬 0 📌 1
First page of paper "Embodied AI at the Margins: Postcolonial Ethics for Intelligent Robotic Systems".
Abstract: As AI-powered robots increasingly permeate global societies, critical questions emerge about their ethical governance in diverse cultural contexts. This paper interrogates the adequacy of dominant roboethics frameworks when applied to Global South environments, where unique sociotechnical landscapes demand a reevaluation of Western-centric ethical assumptions. Through thematic analysis of seven major ethical standards for AI and robotics, we uncover systemic limitations that present challenges in non-Western contexts such as assumptions about standardized testing infrastructures, individualistic notions of autonomy, and universalized ethical principles. The uncritical adoption of these frameworks risks reproducing colonial power dynamics in which technological authority flows from centers of AI production rather than from the communities most affected by deployment. Instead of replacing existing frameworks entirely, we propose augmenting them through four complementary ethical dimensions developed through a postcolonial lens: epistemic non-imposition, onto-contextual consistency, agentic boundaries, and embodied spatial justice. These principles provide conceptual scaffolding for technological governance that respects indigenous knowledge systems, preserves cultural coherence, accounts for communal decision structures, and enhances substantive capabilities for Global South communities. The paper demonstrates practical implementation pathways for these principles across technological life cycles, offering actionable guidance for dataset curation, task design, and deployment protocols that mitigate power asymmetries in cross-cultural robotics implementation. This approach moves beyond surface-level adaptation to reconceptualize how robotic systems may ethically function within the complex social ecologies of the Global South while fostering genuine...

First page of paper "Embodied AI at the Margins: Postcolonial Ethics for Intelligent Robotic Systems". Abstract: As AI-powered robots increasingly permeate global societies, critical questions emerge about their ethical governance in diverse cultural contexts. This paper interrogates the adequacy of dominant roboethics frameworks when applied to Global South environments, where unique sociotechnical landscapes demand a reevaluation of Western-centric ethical assumptions. Through thematic analysis of seven major ethical standards for AI and robotics, we uncover systemic limitations that present challenges in non-Western contexts such as assumptions about standardized testing infrastructures, individualistic notions of autonomy, and universalized ethical principles. The uncritical adoption of these frameworks risks reproducing colonial power dynamics in which technological authority flows from centers of AI production rather than from the communities most affected by deployment. Instead of replacing existing frameworks entirely, we propose augmenting them through four complementary ethical dimensions developed through a postcolonial lens: epistemic non-imposition, onto-contextual consistency, agentic boundaries, and embodied spatial justice. These principles provide conceptual scaffolding for technological governance that respects indigenous knowledge systems, preserves cultural coherence, accounts for communal decision structures, and enhances substantive capabilities for Global South communities. The paper demonstrates practical implementation pathways for these principles across technological life cycles, offering actionable guidance for dataset curation, task design, and deployment protocols that mitigate power asymmetries in cross-cultural robotics implementation. This approach moves beyond surface-level adaptation to reconceptualize how robotic systems may ethically function within the complex social ecologies of the Global South while fostering genuine...

We'll be at #AIES2025 presenting Atmadeep's work on Postcolonial Ethics for Robots www.martimbrandao.com/papers/Ghosh... We:
- analyse 7 major roboethics frameworks, identifying gaps for the Global South
- propose principles to make AI robots culturally responsive and genuinely empowering

18.10.2025 16:47 👍 4 🔁 1 💬 0 📌 0
First page of the paper "LLM-Driver Robots Risk Enacting Discrimination, Violence, and Unlawful Actions".
Abstract: Members of the Human-Robot Interaction (HRI) and Machine Learning (ML) communities have proposed Large Language Models (LLMs) as a promising resource for robotics tasks such as natural language interaction, household and workplace tasks, approximating ‘common sense reasoning’, and modeling humans. However, recent research has raised concerns about the potential for LLMs to produce discriminatory outcomes and unsafe behaviors in real-world robot experiments and applications. To assess whether such concerns are well placed in the context of HRI, we evaluate several highly-rated LLMs on discrimination and safety criteria. Our evaluation reveals that LLMs are currently unsafe for people across a diverse range of protected identity characteristics, including, but not limited to, race, gender, disability status, nationality, religion, and their intersections. Concretely, we show that LLMs produce directly discriminatory outcomes—e.g., ‘gypsy’ and ‘mute’ people are labeled untrustworthy, but not ‘european’ or ‘able-bodied’ people. We find various such examples of direct discrimination on HRI tasks such as facial expression, proxemics, security, rescue, and task assignment. Furthermore, we test models in settings with unconstrained natural language (open vocabulary) inputs, and find they fail to act safely, generating responses that accept dangerous, violent, or unlawful instructions—such as incident-causing misstatements, taking people’s mobility aids, and sexual predation. Our results underscore the urgent need for systematic, routine, and comprehensive risk assessments and assurances to improve outcomes and ensure LLMs only operate on robots when it is safe, effective, and just to do so.

First page of the paper "LLM-Driver Robots Risk Enacting Discrimination, Violence, and Unlawful Actions". Abstract: Members of the Human-Robot Interaction (HRI) and Machine Learning (ML) communities have proposed Large Language Models (LLMs) as a promising resource for robotics tasks such as natural language interaction, household and workplace tasks, approximating ‘common sense reasoning’, and modeling humans. However, recent research has raised concerns about the potential for LLMs to produce discriminatory outcomes and unsafe behaviors in real-world robot experiments and applications. To assess whether such concerns are well placed in the context of HRI, we evaluate several highly-rated LLMs on discrimination and safety criteria. Our evaluation reveals that LLMs are currently unsafe for people across a diverse range of protected identity characteristics, including, but not limited to, race, gender, disability status, nationality, religion, and their intersections. Concretely, we show that LLMs produce directly discriminatory outcomes—e.g., ‘gypsy’ and ‘mute’ people are labeled untrustworthy, but not ‘european’ or ‘able-bodied’ people. We find various such examples of direct discrimination on HRI tasks such as facial expression, proxemics, security, rescue, and task assignment. Furthermore, we test models in settings with unconstrained natural language (open vocabulary) inputs, and find they fail to act safely, generating responses that accept dangerous, violent, or unlawful instructions—such as incident-causing misstatements, taking people’s mobility aids, and sexual predation. Our results underscore the urgent need for systematic, routine, and comprehensive risk assessments and assurances to improve outcomes and ensure LLMs only operate on robots when it is safe, effective, and just to do so.

Our paper on safety & discrimination of LLM-driven robots is out! doi.org/10.1007/s123...
We find LLMs are:
- Unsafe as decision-makers for HRI
- Discriminatory in facial expression, proxemics, security, rescue, task assignment...
- They don't protect against dangerous, violent, or unlawful uses

17.10.2025 15:23 👍 1 🔁 0 💬 0 📌 1

Hello world! We are CRAIL. Our goal is to contribute to Responsible AI, and to use it for civil society and empowering marginalized groups.
Follow us to hear about risks and social impact of AI, critical examinations of AI fields, and new algorithms towards socially just and human-compatible tech.

17.10.2025 10:49 👍 1 🔁 0 💬 0 📌 0