
AI Accountability Lab

@aial.ie

Trinity College Dublin’s Artificial Intelligence Accountability Lab (https://aial.ie/) was founded and is led by Dr Abeba Birhane. The lab studies AI technologies and their downstream societal impact with the aim of fostering a greater ecology of AI accountability.

3,023
Followers
43
Following
28
Posts
14.11.2024
Joined

Latest posts by AI Accountability Lab @aial.ie

GPAI Training Transparency

The team (Dick Blankvoort, @harshp.com, & Maximilian Gahntz) will be presenting this work at FAccT 🎉 in June, and you can access the preprint and analysis here: aial.ie/research/gpa...

If you have feedback or are interested in collaborating, please reach out to them.

end/

05.03.2026 18:32 👍 7 🔁 1 💬 0 📌 0
How Big AI Developers are Skirting a Mandate for Training Data Transparency We need better visibility into what data AI developers are using to train their models, write Dick Blankvoort, Harshvardhan Pandit, and Maximilian Gahntz.

This work is timely and is already being covered in the media:

www.techpolicy.press/how-big-ai-d...

15/

05.03.2026 18:23 👍 17 🔁 10 💬 1 📌 0

Despite their declared assurances and signed codes of conduct/practice, no big provider has published a summary. Only four providers have explicitly done so, and all are small orgs or open-source developers. This sinks arguments that the obligation is burdensome or excessive.

2/

05.03.2026 18:08 👍 19 🔁 7 💬 1 📌 0

New paper from team @aial.ie! aial.ie/research/gpa...

The EU AI Act's Article 53(1)(d) obliges GPAI model providers to publicly provide a 'summary' of their model’s training data. The team assessed published summaries along 6 dimensions & found that all big providers failed on all 6.

1/

05.03.2026 18:04 👍 114 🔁 69 💬 2 📌 3
AI Accountability Lab: Stewarding a greater ecology of accountability in the age of AI

As Director of @aial.ie at ADAPT & @tcddublin.bsky.social, Abeba and her team have since produced influential research on #surveillance practices, secured #EU funding to develop audit frameworks, and contributed to both national and #global #AI policy discussions. More about the lab here: aial.ie

02.03.2026 12:11 👍 7 🔁 3 💬 0 📌 0

Dr Abeba Birhane @abeba.bsky.social @aial.ie, ADAPT & @tcddublin.bsky.social, features in @researchireland.ie's inaugural strategy: Curiosity, Capability, Competitiveness – Charting Ireland’s Research and Innovation Future 2026–2030.

📌 Read the strategy here: www.researchireland.ie/news/researc...

02.03.2026 12:09 👍 10 🔁 8 💬 1 📌 0

today a class of young students (12-14 yr old) came to visit @aial.ie on campus. they were so keen to learn about AI and most importantly I was blown away by the type of questions they asked us: water consumption of data centers, how openai makes money, why RAM prices keep going up, ...

25.02.2026 21:09 👍 114 🔁 20 💬 9 📌 0
AI-produced material online threatens to ‘erode the foundations of democratic life’ Director of Trinity College Dublin’s AI Accountability Lab says tools such as ChatGPT and Grok are a ‘social disaster’

well, this was quick www.irishtimes.com/politics/202...

17.02.2026 18:23 👍 47 🔁 20 💬 1 📌 2
Reads: Most importantly, there is no AI without massive financial and ideological backing. It is therefore pointless to discuss its techniques or capabilities without asking who controls it, who benefits from it, who builds and deploys it, and what it is doing in the world. As Stafford Beer (2002) argued, the purpose of a system is what it does.

Reads: Though less explicit than Thiel’s call to replace politics with technology, major tech firms have effectively privatised core digital public goods. Platforms like Facebook, Google Search, and OpenAI’s ChatGPT operate at infrastructural scale in Ireland, shaping information, communication, and access to knowledge. Yet their algorithms remain opaque, their governance remains private, with minimal democratic accountability to the public who depend on them; effectively ceding aspects of democratic process to commercial interests.

The monopolization of digital spaces has turned democracy into something the highest bidder can buy and is degrading the digital public goods themselves. As the AI industry, social media and search platforms grow more extractive and less trustworthy, they erode the foundations of democratic life: trust, dialogue, and accountability, blurring the line between truth and falsehood.

An example is the deepfake video falsely showing President Catherine Connolly withdrawing from the presidential race last October, which amassed over 160,000 Facebook views before being removed.

GenAI’s non-deterministic, stochastic architecture produces plausible output without regard for accuracy or truth.

This makes generative AI a societal disaster and a major threat to truth, democratic processes, information ecosystems, knowledge production, and the social fabric.

Reads: For truth, democracy, and the rule of law to endure in the AI era, we need to cultivate an ecosystem of transparency and accountability. Yet governance by algorithms inherently places our digital public squares and democratic processes in the hands of those building these systems in line with their political and profit-seeking agendas. Without real mechanisms in place, talk of transparency and accountability is an empty gesture.

An internal Meta memo outlining plans to launch facial recognition in smart glasses “during a dynamic political environment where many civil society groups that we would expect to attack us would have their resources focused on other concerns” illustrates how those advocating for accountability are under-resourced, retaliated against, and targeted.

Large tech and AI companies, despite selling promises of innovation and societal benefit, monetize and undermine the very society they claim to serve. What is needed is not just regulation, but active enforcement.

Given the track record of tech giants, stricter regulation and enforcement is not “anti–freedom of speech” or anti-competitiveness. It is one of the clearest ways governments can show they serve the public interest. After all, innovation that disregards truth and democratic processes risks undermining democracy itself.

I appeared as an expert witness before the Joint Committee on AI at the Houses of the Oireachtas (parliament of Ireland) to discuss "AI: truth and democracy" this morning. You can read my opening statement here: www.oireachtas.ie/en/publicati...

17.02.2026 15:01 👍 158 🔁 67 💬 6 📌 5
Blue to green gradient graphic with headshots of the speakers for the event, and text reading: "Technomoral Conversations: What's the Story with AI? Exploring AI Narratives. Join us on 11 February at 6pm in Edinburgh & online, where we will hear from Alex Taylor (University of Edinburgh), Abeba Birhane (Trinity College Dublin), Louise Amoore (Durham University) and John Thornhill (Financial Times)." There are logos for EFI, CTMF and BRAID, the co-organisers of the event.

During her visit, @abeba.bsky.social will be taking part in our Technomoral Conversations event exploring AI narratives and counter-narratives – a collaboration w/ @braiduk.bsky.social & @edfuturesinstitute.bsky.social

🗓️ 11 Feb 18.00-19.30
📍 Edinburgh Futures Institute & online
🎟️ edin.ac/3MZEm0a

27.01.2026 12:33 👍 13 🔁 11 💬 0 📌 0
Headshot photo of Dr Abeba Birhane in front of a bright blue background. She has long black hair, worn in braids. She is wearing glasses and a silky, pale grey dress shirt. Text reads: Dr Abeba Birhane, Trinity College Dublin.

This February, we look forward to hosting @abeba.bsky.social for one week as our Distinguished Visiting Scholar!

Dr Birhane founded and leads @aial.ie and is assistant professor of AI @tcddublin.bsky.social. She researches AI accountability with a focus on audits of AI models and training datasets.

27.01.2026 12:30 👍 20 🔁 4 💬 1 📌 1

if you’re passionate about AI accountability research and enjoy working in a vibrant lab with a multi-disciplinary team but not interested in doing traditional academic work, this position might be for you

21.01.2026 16:25 👍 45 🔁 40 💬 1 📌 0
Main Responsibilities
Operational and Financial Administration:
• Coordinate the day-to-day operational and administrative activities of the AI Accountability Lab.
• Oversee lab budgets and financial planning for research grants and cost centres; monitor expenditure and prepare financial reports for the PI and Faculty Finance Office.
• Ensure adherence to Trinity and funder financial policies and procurement procedures.
• Maintain internal administrative systems, documentation, and records to support efficient project delivery.
• Liaise with central Finance, the Research Development Office, and external funding bodies on budgetary and governance matters.
• Coordinate HR administrative processes for the lab, including recruitment, onboarding, contract administration, and research staff extensions in liaison with central HR and the Research Office.
• Create and maintain shared resources, calendars, and internal documentation.
• Coordinate relationships and collaboration with other research centres, government bodies, policy makers, civil society partners, and media as required by the research team.
• Plan and schedule internal lab meetings, reading groups, invited speakers, collaborative work sessions, and other events as required.

Research Coordination and Governance:
• Track project timelines, deliverables, reporting schedules, and ethical compliance across multiple funded projects.
• Ensure research activities meet Trinity and funder governance requirements (e.g., ethics, data management, GDPR, and reporting).
• Coordinate workshops, seminars, and collaborative events with research centres and external partners as required.
• Maintain shared documentation and project records to support effective collaboration.
• Ensure that research outputs comply with ethical and scientific standards and satisfy the terms and conditions of relevant funding bodies.

Communications and External Engagement:
• Coordinate internal and external communications for the lab, including its website, newsletters, and social media presence.
• Liaise with policy makers, media, and stakeholder organisations on behalf of the lab, in consultation with the PI.
• Support the dissemination of research outputs, such as academic publications, policy briefs, and public events in collaboration with ADAPT’s and TCD’s communications staff.
• Work with Trinity Communications to prepare press releases and highlight lab achievements.
• Represent the lab externally with media if required.

Planning and Cross-Centre Coordination:
• Support the PI in delivering AIAL’s annual operational plan and reporting on progress against objectives.
• Coordinate activities across multiple projects and partnerships to ensure consistency and alignment with institutional priorities.
• Contribute to process improvements that strengthen the lab’s administrative and operational framework.
• Act as a key contact for internal governance reviews, audits, and funder compliance checks.
Any other duties or responsibilities as assigned by the PI or their delegate in support of the effective operation of the AI Accountability Lab and its objectives.

I’m looking for my right-hand person to come help me run the @aial.ie

- Job Title: Lab Coordinator, AI Accountability Lab (0.8 FTE)
- Pay Scale: (€58,999 - €69,325 per annum pro-rata)
- Closing Date: 11-Feb-2026 12:00

Apply here: my.corehr.com/pls/trrecrui...

Main Responsibilities👇🏾

21.01.2026 11:17 👍 64 🔁 67 💬 4 📌 6

please apply and share with your network

21.01.2026 15:30 👍 9 🔁 8 💬 0 📌 0

I am assembling resources for @aial.ie to mitigate/reduce risks (due to our research) from potential:

1)retaliation, defamation lawsuit etc for work on politically charged topics

2)emotional harm from dealing with sensitive issues (CSAM, hate, etc)

know of any helpful resources? pls share/repost

20.01.2026 20:22 👍 51 🔁 32 💬 9 📌 1

also, i'm looking for a phd student to join our team at the @aial.ie

bsky.app/profile/abeb...

22.12.2025 17:02 👍 13 🔁 8 💬 0 📌 0
About the PhD

Audits and evaluation of AI systems — and the broader context that AI systems operate in — have become central to conceptualising, quantifying, measuring, and understanding the operations, failures, limitations, underlying assumptions, and downstream societal implications of AI systems. Existing AI audit and evaluation efforts are fragmented, siloed, and ad hoc, with little deliberation and reflection on conceptual rigour and methodological validity.

This PhD is for a candidate who is passionate about exploring what conceptually cogent, methodologically sound, and well-founded AI evaluation and safety research might look like. This requires grappling with questions such as:

    What does it mean to represent “ground truth” in proxies, synthetic data, or computational simulation?
    How do we reliably measure abstract and complex phenomena?
    What are the epistemological or methodological implications of quantification and measurement approaches we choose to employ? Particularly, what underlying presuppositions, values, or perspectives do they entail?
    How do we ensure the lived experiences of impacted communities play a critical role in the development and justification of measurement metrics and proxies?

Through exploration of these questions, the candidate is expected to engage with core concepts in the philosophy of science, history of science, Black feminist epistemologies, and similar schools of thought to develop an in-depth understanding of existing practices, with the aim of applying it to advance shared standards and best practice in AI evaluation.

The candidate is expected to integrate empirical (for example, through analysis or evaluation of existing benchmarks) or practical (for example, by executing evaluation of AI systems) components into the overall work.

are you disgruntled by the current safety evaluation landscape? curious about what conceptual clarity, methodological soundness and rigour in AI evaluation might look like? if so, consider coming to dublin and doing a phd with me

apply here: aial.ie/hiring/phd-a...

17.12.2025 19:33 👍 79 🔁 54 💬 2 📌 3

i'm hiring a postdoc to map the age-verification space and do some audits

if you’ve worked in this area, pls apply: aial.ie/hiring/postd...

pls share widely

17.12.2025 19:00 👍 36 🔁 45 💬 0 📌 0
About the PhD

Audits and evaluation of AI systems — and the broader context that AI systems operate in — have become central to conceptualising, quantifying, measuring, and understanding the operations, failures, limitations, underlying assumptions, and downstream societal implications of AI systems. Existing AI audit and evaluation efforts are fragmented, siloed, and ad hoc, with little deliberation and reflection on conceptual rigour and methodological validity.

This PhD is for a candidate who is passionate about exploring what conceptually cogent, methodologically sound, and well-founded AI evaluation and safety research might look like. This requires grappling with questions such as:

    What does it mean to represent “ground truth” in proxies, synthetic data, or computational simulation?
    How do we reliably measure abstract and complex phenomena?
    What are the epistemological or methodological implications of quantification and measurement approaches we choose to employ? Particularly, what underlying presuppositions, values, or perspectives do they entail?
    How do we ensure the lived experiences of impacted communities play a critical role in the development and justification of measurement metrics and proxies?

Through exploration of these questions, the candidate is expected to engage with core concepts in the philosophy of science, history of science, Black feminist epistemologies, and similar schools of thought to develop an in-depth understanding of existing practices, with the aim of applying it to advance shared standards and best practice in AI evaluation.

The candidate is expected to integrate empirical (for example, through analysis or evaluation of existing benchmarks) or practical (for example, by executing evaluation of AI systems) components into the overall work.

Are you passionate about exploring what conceptually cogent, methodologically sound, and well-founded AI evaluation and safety research might look like? Come do a PhD with us.

Closing Date: 10 February 2026

Apply here aial.ie/hiring/phd-a...

17.12.2025 18:52 👍 16 🔁 10 💬 0 📌 0
About the role

Age verification systems are increasingly being deployed in ways that rely on inferring the age or age range of the user from data such as live selfies or conversational analysis. These systems pose challenges regarding accuracy, bias, and privacy, and most importantly raise questions around the scientific legitimacy and methodological soundness underlying the very process. Furthermore, commercially deployed age-verification systems and processes operate in non-transparent ways, with no recourse for errors and deficiencies. Currently, information on where, how, and when such technologies are being developed and deployed is extremely scarce, as is information on which actors are becoming established as market leaders and key vendors. The AIAL seeks to understand this challenge better through high-quality, high-impact research that helps uphold the principles of scientific validity, methodological rigour, transparency, accountability, and equity.

To directly tackle this urgent problem, the AIAL is looking for a highly driven Post-Doctoral Researcher who can identify and map the current state of commercially used age verification systems, the techniques being used, and the key actors in the development and deployment pipeline. Based on publicly available information, the researcher will closely replicate the techniques used in commercial age-verification systems and undertake socio-technical audits with a focus on assessing the scientific legitimacy, epistemic and methodological validity of the overall approach as well as assessing the accuracy, discrimination, and privacy risks.

The output of this work is expected to contribute to advancing the state of the art in accountable governance and responsible practices, and will support the application of relevant laws such as the GDPR and the AI Act. Ultimately, in the best-case scenario, insights from this work would feed into legal or other mechanisms to gain access to real-world deployed age-verification systems.

We're hiring a postdoc to identify, map and evaluate commercially used age verification systems

Closing Date: 16 January 2026

Apply here: aial.ie/hiring/postd...

17.12.2025 18:47 👍 21 🔁 13 💬 0 📌 1
UK is running out of water - but data centres refuse to say how much they use One government insider said 'accurate water figures have historically been very hard to get from facilities of any size'

Why has the UK Govt's Environment Agency put its name to a dubious report produced by Big Tech lobbyists TechUK? inews.co.uk/news/uk-runn...

01.12.2025 08:43 👍 5 🔁 3 💬 0 📌 1

The AI Accountability Lab @aial.ie at @tcddublin.bsky.social & ADAPT secured a major UK grant to investigate potential risks of #AICompanions. The lab is led by Prof. @abeba.bsky.social - learn more: www.adaptcentre.ie/news-and-eve...
@researchireland.ie

28.11.2025 13:37 👍 9 🔁 2 💬 0 📌 0

at the @aial.ie, we don't audit AI systems for companies/startups on demand, nor do we provide any kind of consultation services, as doing so would compromise our independence

we also don't have a dedicated person to attend the lab's inbox at the mo and unfortunately I'm unable to respond to such emails

22.11.2025 17:07 👍 134 🔁 16 💬 3 📌 1

Delighted to share that Dr Abeba Birhane, of @tcddublin.bsky.social SCSS, ADAPT & founder of @aial.ie, has been named one of Irish Tatler's 2025 Women of the Year in the Innovation category! Congratulations, @abeba.bsky.social!
www.adaptcentre.ie/news-and-eve... @researchireland.ie

19.11.2025 12:26 👍 32 🔁 6 💬 0 📌 0

Google's assault on the news industry is also an assault on the truth. Why are regulators so slow to act?

14.11.2025 17:20 👍 2 🔁 2 💬 0 📌 0

Data centres in Scotland are being allowed to call themselves "green" despite using vast amounts of electricity and causing high levels of climate pollution.

17.11.2025 09:07 👍 5 🔁 2 💬 0 📌 0

"data centers already accounted for 22% of Ireland's total electricity consumption in 2024. In the Dublin/Meath area, where a third of Ireland's population lives, 48% of the electricity was used by data centers in 2023." @abeba.bsky.social & @krisshrishak.bsky.social

✍🏼 www.iccl.ie/press-releas...

17.11.2025 14:24 👍 118 🔁 67 💬 1 📌 3
AI Hype Is Steering EU Policy Off Course | TechPolicy.Press Kris Shrishak and Abeba Birhane say policymakers should stop peddling in unscientific discourse about "AGI" and "superintelligence."

In a short piece for @techpolicypress.bsky.social, @abeba.bsky.social and I write #AIHype Is Steering EU Policy Off Course.

Stop peddling in unscientific discourse about “AGI” and “superintelligence.” Serve citizens. Don't cater to the whims of tech CEOs.

www.techpolicy.press/ai-hype-is-s...

17.11.2025 14:09 👍 101 🔁 52 💬 2 📌 7
AI Hype Is Steering EU Policy Off Course | TechPolicy.Press Kris Shrishak and Abeba Birhane say policymakers should stop peddling in unscientific discourse about "AGI" and "superintelligence."

We need our policymakers everywhere to be attuned to the needs of the people, not the whims of CEOs -- and especially not CEOs in the grip of TESCREAList fantasies.

Thank you @abeba.bsky.social and @krisshrishak.bsky.social for speaking up

www.techpolicy.press/ai-hype-is-s...

17.11.2025 14:41 👍 100 🔁 39 💬 2 📌 3
AI Hype Is Steering EU Policy Off Course | TechPolicy.Press Kris Shrishak and Abeba Birhane say policymakers should stop peddling in unscientific discourse about "AGI" and "superintelligence."

Europe’s policy makers continue to echo corporate hype and tech CEOs' speculations about “AGI” and “superintelligence”. In this piece, @krisshrishak.bsky.social & I call on our policy makers to be grounded in empirical evidence

www.techpolicy.press/ai-hype-is-s...

17.11.2025 19:27 👍 33 🔁 14 💬 1 📌 1