Artificial intelligence didn’t use to be the obvious answer to every problem, notes @emoss.bsky.social. But AI approaches have moved into domains such as medicine, criminal justice, materials science, e-commerce, and more.
Why is this so?
New from me in @publicbooks.bsky.social's Technology section: "Toward a Realpolitik of AI" reviews three excellent books by @bialski.bsky.social, @randomwalker.bsky.social and @sayash.bsky.social, and @garymarcus.bsky.social.
What in the Francis Galton is this?!
Economics
Such a necessary, timely, well-made argument that pulls focus back to where it should be: actual harms.
But also, it puts Bayesian thinking in its appropriate place by pointing out that Bayesian statistics are good when you can ground your expectations in actual reality and bad when you cannot.
I know fascists love columns, but I’d expect more than two of them to support this ridiculous claim.
The notion that you can pick and choose which fights to have with authoritarianism is an admission of defeat.
If it quacks like a goose and it steps like a goose...
Finish the sentence, please! And so you will do what?
File injunctions in court. Use your seat on the Senate Committee on Health, Education, Labor and Pensions to make withheld public health reports part of the congressional record. Demand the law be enforced. Complicity is consent. You’re a senator, for the love of Pete.
What’s keeping members of Congress from requesting info from these agencies and entering it into the congressional record? Seems like a good use of oversight powers…
Even granting the concern about security, it’s hard to imagine training a completely secure model without significant overlaps in training data, given the promiscuity of current training approaches.
To be clear, I agree with both the premise and the conclusion but not sure how the one leads to the other here.
So if I understand this argument correctly, AI is inaccurate, insecure, and unaccountable. Therefore … the US military should build its own.
www.nytimes.com/2025/01/27/o...