If nothing else, I hope this report provides a reminder that the deep problems posed by the use of biometric tech never went away, and won't go away without the active intervention of government.
(9/9)
-- clear, concrete recommendations for how a government concerned about public safety, civil liberties and the rule of law might replace our fragmented, inadequate protections with a comprehensive, future-proof approach to managing this powerful technology.
(8/9)
-- an indispensable, up-to-date review of the current state of biometrics and its regulation in the UK; and
(7/9)
I'm therefore delighted that the Ada Lovelace Institute, and my colleagues @nualapolo.bsky.social and @mbirtwistle.bsky.social, have published An Eye On The Future,
www.adalovelaceinstitute.org/report/an-ey...
which provides:
(6/9)
And around the world, increasingly authoritarian political tendencies should remind us of the power biometric tools can give governments (and corporations) over citizens, and the dangers of granting that power unconditionally. (5/9)
Meanwhile, private sector use of biometric categorisation technologies on members of the public, which exists in a de facto regulatory vacuum, has expanded invisibly but dramatically. (4/9)
In the UK, facial recognition is quietly being rolled out by the police, with pilots having given way to full-scale operational deployments and the once limited parameters for its use threatening to expand dramatically. (3/9)
Though they no longer dominate the news, biometric technologies like facial recognition, emotion prediction & gait analysis, and the concerns they prompted (around ubiquitous surveillance, the categorisation of people & democratic chilling) have only become more relevant. (2/9)
Amid all the interest and excitement around generative AI tools like ChatGPT and Gemini, you could be forgiven for having all but forgotten about their predecessor, discriminative AI -- and the previous poster child for AI controversy: biometrics. (1/9)
The AI Action Summit discussions laid out two paths. One with winners and losers, corporate interests prioritised and a race to an unclear finish line. The other path is nations working together to build a world where AI works for people and society.
🧵 ICYMI, here are some of our team's reflections.
Our briefing paper represents our first foray into this topic (to be followed over the course of the year with far more detail), setting out how Advanced AI Assistants work and the particular challenges they pose for policymakers. (4/4)
They could also exert a huge amount of influence over how people act on, think about & relate to the world; they would require us to place a lot of trust in the hands of AI systems & their developers.
(Not to mention questions about the impact of these things on people's mental wellbeing)
(3/4)
Advanced AI Assistants present a combination of ease of use & general usefulness that could make them very popular amongst the general public. (2/4)
In advance of the Paris AI Summit, we've (@adalovelaceinst.bsky.social) published a briefing on Advanced AI Assistants: foundation-model-based systems that you talk to, & that are designed to guide or take action on the world
www.adalovelaceinstitute.org/policy-brief...
(1/4)
It's Bart versus Australia.
Please welcome the very excellent Ada Lovelace Institute @adalovelaceinst.bsky.social to Bluesky. Follow for the latest news and updates on our independent research into data and AI and their impacts on people and society. And memes. Always memes.