Lawsuits when.
Haley Hubbard, a lawyer representing Heindel, said in a written statement that the open records law “allows citizens the right to know how much of a limited natural resource the county has committed to supply through an agreement that could last up to 70 years with one of the world’s largest companies.” “It is disappointing that Dorchester County yielded to Google’s demands to keep the public in the dark about a long-term commitment of a precious public resource, especially since Google has conceded in similar litigation in Oregon that its trade secret argument has little merit,” he said. The Oregon case involved similar trade secret assertions that The Dalles — a small city east of Portland — made when a newspaper asked how much water Google was consuming at a data center in the area. In a settlement reached in December 2022, The Dalles agreed to provide 10 years of water-use figures and to continue to disclose them going forward.
Yeah this has been going on for years.
www.postandcourier.com/business/dor...
I have only just recently started digging into data centres and water, and the way companies are hiding vital information is beyond belief. It is so, so dodgy.
The entire project to widely popularise the idea that these concerns are "fake" makes so much sense in this context
A big thank you to the @sustainableai.bsky.social lab for having me on to talk about my great big new report!!
Full video here: www.youtube.com/watch?v=PGIY...
And a clip below, about the under-discussed tension within climate / energy spaces on being pro or anti AI -->>>>
Also the playbook between tobacco and AI as well as petroleum is basically shared...
bsky.app/profile/oliv...
olivia.science/before
3/n
Long story short on relevant parts: the tobacco industry jumped on "stress" to divert from the fact that cigs cause cancer, much like AI companies will inevitably do the same with psychosis or wtv to divert from the fact that their bots cause harm. No user is causing this.
& importantly: bsky.app/profile/oliv...
2/
Inevitably they will blame psychosis. And we've seen this before with companies and academics claiming lung cancer is caused by stress not smoking!
Remember Hans Eysenck? www.theguardian.com/science/2019...
> This research programme has led to one of the worst scientific scandals of all time
1/n
Old talk I thought I'd make public also with @samhforbes.bsky.social iirc
We have shirts!
store.dair-institute.org
You all are glowing 🌞🪻🌈
Prob but my brain is mush with migraine now haha so to even start these thoughts will kill me off 🫠
While other AI companies are still struggling to monetize #genAI, Google has found an obvious solution: Sell it to the military. Now, U.S. soldiers can unleash the power of AI for tasks such as «summarizing policy handbooks,» a task that will certainly prove vital on the battlefields of the future
PS:
1) For all pdfs see the little icon next to each paper here: olivia.science#publications
2) Some have video recordings (see relevant icon) too!
And 3) if you need anything specific to AI see: olivia.science/ai
OK! I collected much of what I, @spookyachu.bsky.social, @andreaeyleen.bsky.social (and other collaborators not on here) have said on the Turing test (from critical, gendered, etc. angles) as it keeps being relevant: olivia.science/turing — hope it's useful for others too. Happy Sunday! 🤖💭
I compiled a sort of follow up here olivia.science/before/ on the situation with ppl thinking we're in some special moment:
"We have certainly been here before. Many many times in the past, companies — just like artificial intelligence (AI) companies now — have lied to us to sell us products."
1/
Pygmalion Lens
1) Feminised form: Is the AI, by its (default or exclusive) external characteristics, portraying a hegemonically feminine character? Yes/No
2) Whitened form: Is the AI, by its (default or exclusive) external characteristics, portraying a character that is inherently white (supremacist), Western, Eurocentric, etc.? Yes/No
3) Dislocation from work: Does the AI displace women from a role or occupation, or people in general from a role or occupation that tends to be (coded as) women's work? Yes/No
4) Humanisation via feminisation: Are the AI's claims to intelligence, human-likeness or personhood contingent on stereotypical feminine traits or behaviours? Yes/No
5) Competition with women: Is the AI pitted (rhetorically or otherwise) against women in ways that favour it, and which are harmful to women? Yes/No
6) Diminishment via false equivalence: Does the AI facilitate a rhetoric that deems women as not having full intellectual abilities, or as otherwise less deserving of personhood? Yes/No
7) Obfuscation of diversity: Does the AI, through displacement of specific groups of people, “neutralise” (i.e., whiten, masculinise) a role, vocation, or skill? Yes/No
8) Robot rights: Do the users and/or creators of the AI grant it (aspects of) legal personhood or human(-like) rights? Yes/No
9) Social bonding: Do the users and/or creators of the AI develop interpersonal-like relationships with it? Yes/No
10) Psychological service: Does the AI function to subserve and enhance the egos of its creators and/or users? Yes/No
overview if useful! olivia.science/ai/#pygmalion
Yes, Sarah!
"We have certainly been here before. Many many times in the past, companies — just like artificial intelligence (AI) companies now — have lied to us to sell us products."
olivia.science/before/
perplexity doing this also manages to erase human women computers yet again LMAO what fucking clowns
if you need to see what and why, see: olivia.science/ai/#pygmalion; full pdf here: doi.org/10.31235/osf...
Getting Past Past-Tense

> [ANNs] are not perfect: they are not really explainable, they are not pliable, i.e., they cannot be easily modified to correct any errors observed, and they are not efficient due to the overhead of decoding. In contrast, rule-based methods are more transparent to subject matter experts; they are amenable to having a human in the loop through intervention, manipulation and incorporation of domain knowledge; and further the resulting systems tend to be lightweight and fast. (Chiticariu et al. 2023, p. iii)

In what is known in the literature as the past-tense debate (e.g., Elman et al., 1996; Pinker & Ullman, 2002), cognition and its underpinning substrates were discussed in terms of whether hard-wired capacities, such as grammatical rules for English past-tense formation, are encoded in the genes or otherwise present without learning. Furthermore, claims were made about connectionist systems, such as that ANN “models cannot deal with languages such as Hebrew, where regular and irregular nouns are intermingled in the same phonological neighborhoods” (Pinker & Ullman, 2002, p. 459). While it may have been true for models at the time that certain data sets were unlearnable, or that specific nondeep ANNs had limited learning abilities due to their architecture or training set or regimen, this both does not hold in the present day for certain data sets (discussed below) and continues to hold in the sense that there are data sets that are inaccessible to modeling endeavors using ANNs (see proof in van Rooij et al., 2024). Work such as Zhang et al. (2016, 2017) can serve to neutralize the claim that ANNs might struggle with certain unstructured data sets, for example, “where regular and irregular nouns are intermingled” (Pinker & Ullman, 2002, p. 459), by demonstrating that ANNs can learn utterly random mappings between inputs and outputs.
Of course, such a finding about ANNs is also problematic to C-connectionists, who propose that in many cases similar input–output…
The relevant section is here on page 10 "getting past past-tense" see pdf here and it's not that long, but longer than extract below: olivia.science/doc/GuestMar...
Guest, O. & Martin, A. E. (2025). A Metatheory of Classical and Modern Connectionism. Psychological Review. doi.org/10.1037/rev0...
Yes! Source is also why tobacco companies hide their continued (so-called) scientific contributions to cancer research and source is why their research is banned in journals
"The AI and generally the technology industry is no different. Why should they be?"
olivia.science/before/
Yup!
> I forbid use of my words, code, or images for “training” their artificial neural network models. Such actions are in violation of the licenses outlined above as they require correct attribution.
olivia.science/license/
The saddest part is, we never properly dealt with tobacco and petroleum, which is why we cannot with AI either...
olivia.science/before
bsky.app/profile/oliv...
some resources for cognitive scientists — especially for junior scholars who ask me wonderful questions and want to learn more — on theorising and metatheorising olivia.science/theory/ (not 100% finished & more to come, but all my work is freely available here: olivia.science#publications as usual)
Nice timing for @mjcrockett.bsky.social and my article on AI Surrogates and Illusions of Generalizability to be officially published. www.cell.com/trends/cogni...
The Trump administration’s escalating campaign in Iran marks the beginning of America’s first war in the age of large language models. These events make clear that those who work on AI safety must confront the limits of so-called “alignment to human values,” writes Eryk Salvaggio.
“In testing by CalMatters, they [the chatbots] often answered general questions correctly but struggled with more specific ones. East Los Angeles College’s bot couldn’t even correctly name its own president.”
I’m especially annoyed at @luxottica.bsky.social who made the RayBan frames for Meta glasses impossible to distinguish from regular glasses. Glad they’re being sued for this.
the harder the industry invests in pushing narratives of AI as (only) positive, inevitable, and inherently good for society/business, the more any criticism of this narrative becomes “too radical”, “unworkable” and “unrealistic”