I think a lot of this over-indexes on the composition of the current administration. I would be surprised if this all represents a broadly held belief about AI models
Nice thanks for clarifying!
very cool!
Not to rain on the parade, but this is the same size as the OpenDV dataset, right? Is the novel part the data? Or perhaps that it's in Europe?
Ooo peak design is legit
The more data you have, the better an embedding space you have, and the more likely your interpolation is to be correct. So you are right that something like the answer is probably in the training data, but you are wrong that the exact answer is in the training data or searched for.
Like many social media discussions, what is missing here is nuance. LLMs, like all generative no-prior ML models, are, effectively, interpolating. But in the case of LLMs, they are interpolating in the space of "next token embedding."
for the record, this is why LLMs have been more widely successful and applicable than, say, vision-language-action models, and why VLAs are catching up: this is a recipe that can be applied very broadly, but only works at a production level if the data domain is VERY thoroughly covered
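A toy numerical sketch of the interpolation point above, using 1-D linear interpolation as a deliberately crude stand-in for interpolation in an embedding space (all names and sizes here are illustrative, not from the posts):

```python
import numpy as np

def max_interp_error(n_train):
    """Max error when interpolating sin(x) from n_train samples."""
    x_train = np.linspace(0, 2 * np.pi, n_train)   # "training data"
    x_query = np.linspace(0, 2 * np.pi, 1000)      # "queries"
    y_hat = np.interp(x_query, x_train, np.sin(x_train))
    return float(np.max(np.abs(y_hat - np.sin(x_query))))

# denser coverage of the domain -> more accurate interpolation,
# echoing the point that the recipe only works at a production level
# when the data domain is very thoroughly covered
sparse, dense = max_interp_error(10), max_interp_error(200)
```

Denser sampling of the domain drives the interpolation error toward zero, which is the "more data, better embedding space" intuition in miniature.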
Fundamentally you can *have* both, but functionally when you optimize for multiple objectives, usually only one ends up as the primary. Guzdial's article is suggesting that the prior push being so attached to undergrad outcomes is a bad primary objective for K-12 students, which is reasonable...
Le Chat underrated
well that's not great
I think a deeper difficulty in ML is the economy of attention. With hundreds of papers released on ArXiv in ML each day, a reader needs to resort to heuristics to keep up. Stuff like trusting a recommender system, only reading famous authors, or scanning for buzzwords.
Sarah Paine is incredible
Given what's going on in the world, I think it's time to reread Brave New World
Example: pre-train (reward-free) to map temporal distances into distances in latent space; then fine-tune: map these through a dot product with a latent task description to get a reward function.
A couple of refs:
openreview.net/forum?id=YGh...
arxiv.org/abs/2110.02719
arxiv.org/abs/2110.15191
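A minimal sketch of the recipe described above (shapes, names, and the random "pre-trained" weights are all placeholders; in the linked refs the encoder is actually learned reward-free so that latent distance tracks temporal distance):

```python
import numpy as np

rng = np.random.default_rng(0)
OBS_DIM, LATENT_DIM = 4, 8  # illustrative sizes
W = rng.standard_normal((OBS_DIM, LATENT_DIM))  # stand-in for learned encoder weights

def encode(obs):
    # pre-trained (reward-free) encoder: in the refs, trained so that
    # distances in latent space approximate temporal distances
    return np.tanh(obs @ W)

def reward(obs, task_latent):
    # fine-tune step: a dot product with a latent task description
    # turns the state embedding into a scalar reward
    return float(encode(obs) @ task_latent)

task = rng.standard_normal(LATENT_DIM)  # latent task description (assumed given)
r = reward(rng.standard_normal(OBS_DIM), task)
```

The point of the design is that the expensive representation learning happens once, reward-free, and a new task only needs a latent task vector to induce a reward function.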
I know exactly what you mean. Especially for us academic-related folks, our recommendation bubble gets ultra-tight. My recommendation is to look at some of the "highly followed" topics, which will give a more norm-y feed. But truly BlueSky needs "Trending"
Depending on precision, that is a crazy price for 2 high-quality 6-DoF robot arms, to say nothing of them attached as one torso. If the price holds when people start building it, you can be sure I'll be building one. The Rethink Baxter is a lesson here: cumulative error from backlash will be the important thing
agreed
very exciting!
$14k open source humanoid robot upper torso. Writing with a pen on a notebook that you're holding is an impressively challenging task! Also comes with an open, modular, python software stack for robot control and planning.
openpyro-a1.github.io
Hiring researchers and engineers for a stealth, applied research company with a focus on RL x foundation models. Folks on the team already are leading RL / learning researchers. If you think you'd be good at the research needed to get things working in practice, email me
Raises the question: at what point is multi-task training implicit meta learning @chelseafinn.bsky.social
Congrats Andrew and Rich, well deserved!! apnews.com/article/turi...
One reason to be intolerant of misleading hype in tech and science is that tolerating the small lies and deception is how you get tolerance of big lies
super excited to try this out
Trying to tell the story behind this explosion of research we are in. An unexpected RL Renaissance.
New talk! Forecasting the Alpaca moment for reasoning models and why the new style of RL training is a far bigger deal than the emergence of RLHF.
YouTube: https://buff.ly/41bVRPp
Easier installation, faster PPO script, new tutorials. The team has put in so much work and I'm excited for y'all to try it.
github.com/Emerge-Lab/g...
Incredibly cool article. Why, in spite of all of the hype about the scale of learning, we shouldn't forget the second half of Sutton's Bitter Lesson: search scales too, and often better.
yellow-apartment-148.notion.site/AI-Search-Th...
(h/t klowrey)
"peter thiel backed" ded