Yes. Lately Google AI just sends me to low-quality YouTube videos from engagement miners. Those videos are out of date and skip complex cases.
Sobering views from Phillips O'Brien regarding the Iran War. His assessment is that US participation is the result of government corruption.
open.substack.com/pub/phillips...
Excellent editorial
Excellent news!
Leibniz, looking at the universe: "Why is there something instead of nothing?"
Me, looking at my Outlook calendar: same
I try to figure out how I could have written the paper. What questions did I fail to ask? What experiment did I not conceive? What can I learn from them?
In those days, the sense was that passing the Turing Test would be a useless activity. But I don't think many people realized that applying machine learning to build AI systems was fundamentally about mimicry. At least I, as an ML researcher, didn't appreciate this. So obvious in retrospect!
John Searle's Chinese Room paper (1980) set out to show that passing the Turing Test would tell us nothing about language understanding. The Loebner Prize (1991) also demonstrated that mimicry was easy but did not provide any improvements in capabilities.
Original source?
Some day (if history is any guide), we will learn how accurately these systems conformed to the laws of war (Geneva Conventions). And if history is any guide, no one will be held accountable for the failures of these systems. We don't have the information to judge right now.
The Turing Test has been criticized within the AI community for many years. It rewards mimicry rather than high-quality systematic performance. We now have systems trained to be excellent mimics. And they do not exhibit good systematic performance. The Turing Test was a terrible mistake.
I expected them to eventually advocate divorce to create a quiet sleep environment.
My solution is regular ear cleaning visits.
I stand up for science, but what exactly are "facts"? I know nuance is tough in politics, but the whole point of science is that it is a continual effort to find the truth. Its claims are supported by evidence and those will change if the evidence changes. "Our best current understanding" != Facts
Poor wording on my part. I meant an author who submits a paper that is rejected as not suitable for arXiv. That would usually be a paper that failed to make a claim or provide evidence, a paper that was LLM slop, and so on.
Yes! I would love Google Scholar to remove the counts and h-index information. Semantic Scholar had an interesting approach with their "influential citation" work, but I believe that is no longer being actively maintained.
To address this, there should also be a penalty for endorsing a fake or low-quality user. Each endorser is declaring that "I know this person, they are a real person, and they are a real researcher".
A problem that I'd like people to consider is fake (sock puppet) authors on arXiv submitting papers to boost the citation counts of authors. ArXiv recently tightened its endorsement process, but one fake author who "gets through" can create and endorse many more accounts.
Julian makes many excellent points. I have been opposed to anonymous submissions from the start; they are too open to abuse. His idea that we should create many smaller, more specialized meetings is interesting.
The moral panic is to claim that any method for certifying age online must give up your biometrics. Beyond age, we also need to certify that we are humans (and replace those captchas that are now easier for AIs to solve than for people), that we are registered to vote, that we are citizens, etc.
I'm not sure I count as a leftist, but I think we need age verification for some things both offline (driving, drinking) and online. There are cryptographic methods that can certify age without revealing any other information about a person. We need to adopt those methods.
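A toy sketch of the idea (not a real anonymous-credential scheme, and all names here are hypothetical): an issuer who already knows the user's birthdate signs only the predicate "over 18", so a verifying site learns that single bit and nothing else. Production systems go further, using zero-knowledge proofs or BBS+-style signatures so that even repeated presentations cannot be linked.

```python
import hmac, hashlib, json

ISSUER_KEY = b"demo-issuer-secret"  # hypothetical key for this sketch only

def issue_age_token(birth_year: int, current_year: int) -> dict:
    """Issuer checks the birthdate privately and signs only the predicate."""
    claim = {"over_18": current_year - birth_year >= 18}
    payload = json.dumps(claim, sort_keys=True).encode()
    tag = hmac.new(ISSUER_KEY, payload, hashlib.sha256).hexdigest()
    return {"claim": claim, "tag": tag}

def verify_age_token(token: dict) -> bool:
    """Verifier learns only the boolean predicate, never the birthdate."""
    payload = json.dumps(token["claim"], sort_keys=True).encode()
    expected = hmac.new(ISSUER_KEY, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, token["tag"]) and token["claim"]["over_18"]

token = issue_age_token(birth_year=1990, current_year=2025)
print(verify_age_token(token))       # True
print("birth" in json.dumps(token))  # False: the birthdate never leaves the issuer
```

The design point is the separation of roles: the party that sees the sensitive attribute (the issuer) is not the party that consumes the predicate (the website), and the token carries no other personal information.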
This whole incident raises difficult questions. We know these models need guardrails. Guardrails implemented by RL or fine-tuning are not modular. Who decides which guardrails to implement? Can they be made switchable? The failure of unlearning suggests not.
According to this story, it was the government that initiated the contract discussion with OpenAI.
The real story appears to be more complex, according to the NYTimes
www.nytimes.com/2026/03/01/t...
I just meant that the grant goes to the university which passes it to the professor who hires the students. Quite different from the Canadian system where many students get direct government support
The frustrating thing about this is, agencies have used automated targeting filters and machine learning for MASINT and scenario planning for many years. LLMs aren't necessarily a huge analysis leap except now they are making command decisions. That is INSANE and Anthropic is right to balk at it.
Alphabet/Waymo vs Amazon/Zoox roboclot:
20th St by Lexington, Mission, San Francisco
If their respective remote human assistants could communicate directly, this would be resolved faster and more safely.
OP: tiktok.justjimmynajera
Done!
Gather 'round Bluesky, while I tell the hoary tale of epidemic vs. endemic.
How can it be that pandemic interventions that were vitally important in 2020 are marginally effective in 2025?
Science will give us the answers!
Follow me...
1/
Indirectly, it mostly goes to support graduate students
Yes. National Science Foundation (US). Funds primarily basic research in math, physics, engineering, social sciences (at least pre-Trump), education, and biology.