Every word of this
spectrum.ieee.org/amp/age-veri...
@missiggeek
She/her. Aka Miss IG Geek on other socials. Humanity in data, digitech-ethics, and misanthropology, occasional puffins. #ActuallyAutistic 🏳️‍🌈 Everything you know is actually way more complicated than you think
Just a reminder that governments can invalidate or destroy the things you rely on to be seen as a valid, legal participant in society pretty much at will. So if you think you're good bc you're not trans or not in Kansas or whatever, you're lying to yourself while danger gets closer to your community.
It's my own observation but it's also a "thin end of the wedge" extrapolation from this arxiv.org/abs/2001.05046
Whatever the Met's paperwork might say, this is *not* a legitimate or lawful use case for facerec tech. It is, in fact, exactly the sort of thing data protection law was conceived to prevent, because the *only* purpose for this use case is abuse.
Not "to find convicted criminals who've evaded incarceration" or "to identify people doing crimes as they happen" but to attach names and profiles to *everyone* who passes the camera.
Which is monstrously excessive, insanely intrusive, expensively dysfunctional and mindlessly authoritarian.
Oh FFS
A black and white tuxedo cat with slightly ragged ears, white paws and bib sits up on the corner of a bed looking adorable
Had me some Timmy time this week, I shall be sad to leave him but awfully relieved to be home again
None of that is necessarily intentional or deliberately nefarious - it's just how humans roll. But to pretend it's legitimate to install algorithmic judgment tools in a workplace without radical measures to detoxify its power dynamics is irresponsible and naive.
If a person defers to the algorithm, the algorithm can be blamed for adverse outcomes and accountability is diffused to the point of evaporation. However, if a person does not defer to the algorithm they alone will carry the burden of responsibility for any adverse outcomes.
"Training" is also ineffective as a countermeasure for automation bias, because the issue isn't just recognition of erroneous output; it also arises from the power dynamics of credibility and authority to make independent decisions within the organisation.
Policy documents are no defence against automation bias in a workplace that discourages challenge or critique. For these tools to be genuinely assistive rather than directive, it must be safe to refuse their "assistance" without punishment for doing so over-cautiously.
A worker who accepts algorithmic output despite it being self-evidently wrong is going to be given more grace by management than a worker who refuses an algorithmic output which is ambiguous or unreliable - it's just easier and safer for the worker to refrain from contradicting the machine.
Social biases coded into algorithmic systems are exacerbated by humans' willingness to defer to the system's outputs even when they are contradicted by the evidence of their own eyes, because nodding along with the machine is the path of least resistance.
Iβm old enough that my birth mother had to wear a leg brace for walking because of the damage that polio left her with.
Image from the movie "28 Days Later": a man dressed in hospital scrubs ascends a darkened stairway. On the wall behind him "Repent, the end is extremely fucking nigh" is daubed in black paint
Innit
I speak to Siri in machine-like language "hey Siri, set timer, 15 minutes" because I prefer to reserve the effort of courtesy for human beings who might actually benefit from it and AFAICT, treating software like a person leads to systemic dehumanisation rather than consideration for humanity.
It was inevitable - and what's really sad is that the only effective protection against it is to use LLMs to leach one's writing of any originality or character before publishing. Thus have the authoritarians of the world effectively lobotomised dissidence, protest and critique. Ugh.
It's an optimistic theory but education alone is not a sufficient defence against entire systems that have been designed to be manipulative, divisive and addictive - together with platform regulation, though, it would be more effective than the banhammer
A sturdy 2 year old black Labrador curled up on a wood veneer floor seen from above
But in the meantime - it's okay to turn off/mute the news and seek positive interactions instead!
Brain bleach: here is my fur nephew Murphy all tuckered out after a long beach walk
Our organic capacity for compassion has not evolved for the range and density of connection that the technological age brought with it. Too many Others, not enough space, time or energy to spare for the effort of humanising them. It's not going to end well.
The Internet is not safe for kids because the *world* is not safe for kids. But we don't keep kids locked in sterile isolation chambers until they turn 18.
All the dangers of social media are inherent to the business models of social media. That's the issue that needs addressing.
We've already ended up with a legion of Facebook-radicalised boomers who've had their intellectual faculties and emotional regulation wrecked by the algorithmic hate engines of ad-tech - evidently if we're banning kids from social media we should be banning adults too!
Education, safe reporting spaces and massive investment in skilled human moderation would far better equip kids to deal with the day they turn 18 and their "protections" evaporate, leaving them naive and defenceless in the hostile environment of surveillance capitalism.
"He took my swag sack!" cries a man in a striped top and domino mask.
This isn't a tech problem that can be solved with more tech; it's a governance problem in which throughput metrics substitute for values and conscientiousness is anathema to "return on investment"
Well, yeah. That's what you get when you replace thoughtful human effort with stochastic parrots. It's a foreseeable outcome of reducing every workplace to a generic sausage-making model in which purposes and outcomes are sacrificed to the motions of "productivity"
Has it occurred to any of these geniuses that the parents may be the reason teens are searching for self-harm and suicide content?
This is a terrible idea.
Yup
Humans routinely over-estimate their rationality, mistaking the *potential* (and limited) capacity to reason before acting for a default setting to which reflex, habit and vibes are the exception.
"Think of the children!" is the reddest of red flags for hypocritical authoritarianism and this example just takes the cake.