a kazakh border guard aircraft rather far from the kazakh border, having left SCO en route to ESB at about 2026-03-09T03:22:00Z
i suspect this plane is not conducting a routine border patrol mission
Never a big Narnia guy but this 6 seconds typifies everything I love about British television.
you get: mad max war vehicle
i get: god's magical delicious protein factories that survive on garbage, bugs, and spite for all living things after they were knocked off their apex predator throne long long ago by a screaming death rock from the sky
how does one invest in a literal stock of toyota trucks suitable for turning into technicals, that one expects to barter for chickens and dried beans
on one hand, the cheap AF year+ long puts on AI stocks i got--options versions of penny stocks, bought expecting enough chaos to return a small amount--seem _much_ more likely to turn a profit now
on the other hand i have... not much hope for the strength of the USD, or uh... anything, really
where are you getting meat tortillas and meat salsa
LETS TROT
honestly the one of these near me was okay. could drop in and get a variety of quick takeaway food
but they closed it. local takeaway traffic was an afterthought; they found a better location for their main business, piling multiple orders into burrito taxis bound for wealthier neighborhoods
<some company> spent ages encouraging an internet full of <content that's mostly fluff but pleases the ranking algorithm> and then <some company> lobbed a molotov or two into that same bizarre, unnatural, and on the brink landscape--dry fluff tinder and a few remaining ancient redwoods
whoopsy-doo!
this was... not what i was expecting from the story, tbh
fun quote of the day: "... [i have] always had the impression of being in the hole of a donut" ("что был я в дырке от бублика") --Viktor Shklovsky, whom i sorta knew prior through his work (only now by name) as the writer of the excellent "Bed and Sofa"
(and while it is _entirely_ removed from most of the domains discussed here, i am reminded of www.joshuayaffa.com/betweentwofi..., as analysis of a lot of parallel "what do you do when you're motivated to do good in what oft seems a hopeless, impossible to rescue situation?" cases)
i lament that growth-at-any-cost product manager sorts exist, and that markets enable and encourage them, but i would be remiss to just do nothing when i run up against them in areas where i have expertise and can advocate for people i know their decisions will hurt
the former's nihilism, and the latter's, well, measured but maybe not pragmatic approach when you know there are forces at work that are less driven by altruism
there are cases where they clearly dropped the ball, famously www.thehastingscenter.org/facebooks-em...
i would argue, however, that the "it's too hard, don't try, or wait until you're really, really sure" approaches ain't good either
idk how they measure and test such--ideally they _are_ putting in the effort to study effects--A/B testing in test markets and comparing against controls to confirm that resource use increased, and positive interactions with those resources increased--and submitting those findings for peer review
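(a minimal sketch of the kind of comparison such an A/B test might boil down to--a two-proportion z-test between a treatment arm shown the resource and a control arm. every number and name here is hypothetical, just illustrating the shape of the test; a real study would need to worry about confounds, power, and multiple comparisons)

```python
# hypothetical A/B comparison: did the arm shown the resource
# click through to it more often than the control arm?
from math import sqrt, erf

def two_proportion_z(successes_a, n_a, successes_b, n_b):
    """Two-proportion z-test: returns (z statistic, two-sided p-value)."""
    p_a = successes_a / n_a
    p_b = successes_b / n_b
    # pooled proportion under the null hypothesis that both arms are equal
    p_pool = (successes_a + successes_b) / (n_a + n_b)
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    # two-sided p-value from the standard normal CDF (via math.erf)
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return z, p_value

# made-up counts: treatment 180/10_000 clicked through, control 120/10_000
z, p = two_proportion_z(180, 10_000, 120, 10_000)
print(f"z={z:.2f}, p={p:.4f}")
```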
and if, statistically, the on-target helpful instances are most common, that's good, and better than their having done _nothing_
some cases you can't help. some cases you may hurt. but if you're helping in a lot of cases where you were previously doing nothing, you probably _do_ do something
so in thinking about the designers of such systems, i can understand their aiming for a tradeoff where they're gonna have some automated responses that are way off, some that are on-target but wind up being harmful, and some that are on-target and helpful
but at the same time those systems firing off false positives in edge cases _probably_ mean they also fired in a lot of true positive cases that _i_ don't see because im not the person who should see them, but i do know that there are people that should
ditto hitting SAMHSA warnings in search when querying things that... don't make sense to serve that warning to. the actual population at risk of opioid use disorder ain't searching for "opioid SAR" in the en.wikipedia.org/wiki/Structu... sense, because you don't buy fentanyl analogues off pubmed
i feel kinda the same way when i run into bad automated sentiment detection moderation ("it looks like you're saying something mean on a twitch stream! maybe don't!" when it's something _the streamer_ just said, that i am repeating in chat, with both of us understanding that it's tongue in cheek)
the ideal route of the private entities moving slowly and cautiously just... didn't happen, and was never likely to have happened--broader societal conditions do not encourage such
that is, in turn, itself a problem, but it's a problem we already had and still have!
so between private for-profit entities throwing shit at the wall to see what sticks, damned if they make a mess along the way, and policymakers letting them run wild OR policymakers at least starting the process about building _something_ idk, i guess the second?
yeah i would rather both be done slowly and carefully, but eyyyyyy late-stage capitalism race to the finish or smth
a russian language meme that translates to:
WEAK AURA: someone that unfollowed because they don't like robot voice
STRONG AURA: someone that unfollowed because they don't like robot voice (in the vocaloid sense, afaict)
a human resource isn't necessarily what everyone wants or needs, but it's arguably better to mention they're available
inherently, the AI bots can't provide human support, and some (especially younger) people reaching out to AI bots in crisis may not know what human resources are available
yeah they're gonna screw up and write bad law, and it will need to be reworked. they likely know that themselves, but also know it's worth starting those conversations/starting the legislative and policy research to build the groundwork for better law in the future
realistically policymakers are gonna do _something_ re trying to close a barn door after the horses have already escaped
negative outcomes from people in crisis interacting with chatbots are already here, and civil action by surviving family is already underway, as in www.theguardian.com/technology/2... and www.cnn.com/2025/11/06/u...
good thread. i feel like legislators and regulators are stuck in a "damned if you do, damned if you don't" situation here, since commercial entities are... not really doing due diligence, at all, because YOLO new paradigm ship the products, worry about the consequences later