But, as I've heard from others, the AIs often suggest possible connections, related results, or avenues to pursue that hadn't occurred to me. Unfortunately, these are usually dead ends.
I've found that the pro-version AIs are great for proving known theorems or new theorems that could be considered homework problems, but so far I have had no success using them to solve truly open/novel/challenging math problems.
Since I can't get it out of my head, I wrote up my thoughts on @kevinbaker.bsky.social's critique of AI-automated science and the logical end of processes that can't self-correct.
Kevin Baker's essay is probably the best thing I have read in 2025.
Yes. Just write your thoughts in a rough and unpolished form, say rough paragraphs that contain terse points you want to make. Then let 'er rip.
Section 7 is a wonderful description of the process they went through.
something just isn't fully clicking. if you look at total yards and time of possession, they should have blown them out. well, better anyway to peak later in season, so let's hope that's what happens (like two seasons ago)
Packers get the win, but it wasn't pretty.
Thanks for participating and presenting your work!
Google promotes box shirts too
Pour into
Announcing the first workshop on Foundations of Language Model Reasoning (FoRLM) at NeurIPS 2025!
Soliciting abstracts that advance foundational understanding of reasoning in language models, from theoretical analyses to rigorous empirical studies.
Deadline: Sept 3, 2025
Nice article about my mom's new book shepherdexpress.com/culture/book...
"the only way to predict or to control the functioning of such systems is by an intricate system of charms, spells, and incantations"
See you there!
More likely midges. The truest sign of a healthy ecosystem
Looking forward to a great MMLS!
This is a collaboration with Ziyue Luo, @shroffness, and @kevinlauka.
Jifan's on the industry job market now, and his expertise in efficient training, distillation, and data curation couldn't be more timely. Feel free to reach out to him at jifan@cs.wisc.edu.
Paper: arxiv.org/abs/2410.02755
SIEVE improves upon existing quality filtering methods in the DataComp-LM challenge, producing better LLM pretraining data that led to improved model performance.
This work is part of Jifan's broader research on efficient ML training, from active learning to label-efficient SFT for LLMs.
Why does this matter? High-quality data is the bedrock of LLM training. SIEVE enables filtering trillions of web documents for specific domains like medical/legal text with customizable natural language prompts.
SIEVE distills GPT-4's data filtering capabilities into lightweight models at <1% of the cost. Not just minor improvements - we're talking 500x more efficient filtering operations.
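The distillation idea in the posts above can be sketched in miniature. This is purely illustrative and not SIEVE's actual implementation: the toy `teacher_label` stands in for an expensive GPT-4 quality judgment against a natural-language prompt, and the "student" is a tiny Naive Bayes model fit on the teacher's labels, which then filters the full stream at negligible cost.

```python
# Hypothetical sketch of distillation-based data filtering.
# All names here are illustrative assumptions, not SIEVE's API.
from collections import Counter
import math

def teacher_label(doc: str) -> int:
    """Stand-in for an expensive LLM judgment (1 = keep, 0 = drop)."""
    medical_terms = {"patient", "diagnosis", "clinical", "treatment"}
    return int(any(t in doc.lower().split() for t in medical_terms))

class StudentFilter:
    """Tiny Naive Bayes 'student' distilled from teacher labels."""
    def __init__(self):
        self.counts = {0: Counter(), 1: Counter()}
        self.totals = {0: 0, 1: 0}

    def fit(self, docs):
        # Query the costly teacher once per sample document only.
        for doc in docs:
            y = teacher_label(doc)
            for w in doc.lower().split():
                self.counts[y][w] += 1
                self.totals[y] += 1

    def keep(self, doc: str) -> bool:
        # Cheap add-one-smoothed log-likelihood comparison per class.
        scores = {}
        for y in (0, 1):
            total = self.totals[y] + 1
            scores[y] = sum(
                math.log((self.counts[y][w] + 1) / total)
                for w in doc.lower().split()
            )
        return scores[1] > scores[0]
```

After fitting on a small teacher-labeled sample, only the lightweight student runs over the remaining billions of documents, which is where the claimed cost savings would come from.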
Heard all the buzz around distilling from OpenAI models? Check out @jifanz's latest work SIEVE - showing how strategic distillation can make LLM development radically more cost-effective while matching quality.
Maybe Trump should have read my mom's book: "For the first six weeks, the embryo, whether XX or XY, coasts along in sexual ambiguity." p. 25
Task vectors are akin to punchcards: you feed them to your LLM and it implements specific tasks, without in-context demonstrations. Liu's new paper examines at what scale, where in the network, and when during training they emerge, and how to encourage their emergence.
arxiv.org/pdf/2501.09240
Good luck with that
p.s. we don't know for sure if I said this or not