My understanding is that it is wise for them to wait on ads until they have sorted out their profit/non-profit status, so that the non-profit is worth as little as possible: www.bloomberg.com/opinion/arti...
@sebastianraschka.com Hey, a fellow listener to @atp.fm (I infer from today's show, 26:55). I have often wondered whether the audience for that show overlapped with my AI/NLP network.
Joe Boyd is also a pivotal figure in what is probably my favorite podcast episode of all time: @99pi.org episode 141, "Three Records from Sundown", about Nick Drake: 99percentinvisible.org/episode/thre...
I misunderstood a reference to "Music from Big Pink" in Tyler Cowen's recent interview with Joe Boyd, and now I have spent an entire weekend listening to the band "The Big Pink" on a loop – absolutely perfect for a lost weekend at one's desk: en.wikipedia.org/wiki/The_Big...
And a big thank you to everyone who came to the talk itself. The discussion period after was really rich and wide-ranging.
I thank lots of people at the very end for their role in shaping this work. A special shout-out to @aryaman.io for creating CausalGym, which made it very easy for me to conduct all the intervention-based analysis in the talk: github.com/aryamanarora...
I've posted the practice run of my LSA keynote. My core claim is that LLMs can be useful tools for doing close linguistic analysis. I illustrate with a detailed case study, drawing on corpus evidence, targeted syntactic evaluations, and causal intervention-based analyses: youtu.be/DBorepHuKDM
This, from James Gandolfini, is one of the best line deliveries in all of cinema: youtu.be/2GW_KjMoLPw?...
I hope those 2 citations are floating around out there for you, but you can also toast to continued year-over-year 200%+ citation count increases in 2025!
I am very fortunate – I experience mostly thoughtful comments here and on Twitter, and so Twitter mostly just offers me more of that. In addition, I do not feel that BlueSky is intrinsically a more considered or caring place than Twitter. I've seen truly awful attacks in both places.
I would like to leave Twitter, but I get engagement from a really broad range of people there, and that's what I am looking for from social media. I like to encourage people getting into my field, and I benefit from consuming the full smorgasbord of hot takes I read there.
There may be a bubble, but I think I'd still bet in their favor. It would sound to me like another parallel with Amazon – perhaps the most famous case of a company that was predicted never to be profitable (and is sometimes still described that way) but now has a market cap of $2.2T.
I am confident OpenAI will become profitable. They are smart, creative, highly incentivized, and well-funded. On the other hand, any app/company that depends on capturing most of the value from OpenAI's models has an uncertain future, like the Twitter apps of old.
Bill Labov died this morning. I'm not coherent enough to talk about how important and influential and brilliant he was. I am very sad.
I was so lucky to know him, and I am grateful every day that he (and Gillian, and Walt, etc) built an academic field where kindness is expected.
Announcement #1: our call for papers is up!
colmweb.org/cfp.html
And excited to announce the COLM 2025 program chairs @yoavartzi.com @eunsol.bsky.social @ranjaykrishna.bsky.social and @adtraghunathan.bsky.social
Ok but this last episode of Bob's Burgers was so wonderful #bobsburgers
I found it so touching! The Belchers are rare among TV families in being totally supportive of each other. The conflict between the sisters in this episode was so realistic, and treated seriously, and the episode itself was still also very funny.
MoEUT: Mixture-of-Experts Universal Transformers
Róbert Csordás, Kazuki Irie, Jürgen Schmidhuber, Christopher Potts, Christopher D Manning
Fri, Dec 13, 16:30 PST - Poster Session 6 East
ReFT: Representation Finetuning for Language Models
Spotlight Poster
Zhengxuan Wu · Aryaman Arora · Zheng Wang · Atticus Geiger · Dan Jurafsky · Christopher D Manning · Christopher Potts
Fri 13 Dec 07:00 PM UTC [West Ballroom A-D]
Papers (partly) from @stanfordnlp at #NeurIPS 2024:
Oral: Embodied Agent Interface: Benchmarking LLMs for Embodied Decision Making
Manling Li · Shiyu Zhao · Qineng Wang · Kangrui Wang · … · Weiyu Liu · Percy Liang · Li Fei-Fei · Jiayuan Mao · Jiajun Wu
Wed 11 Dec 11:50 PM UTC [East Ballroom A, B]
My primary role as a Department Chair at Stanford has become complaining about bureaucratic overreach at Stanford. I have sent dozens of messages on this topic just this quarter. And yet I have still not mastered the spelling of "bureaucratic".
Group picture of people in the Stanford NLP Group gathered in front of the shores of Lake Tahoe.
Natural Language Processing – artificial intelligence that uses human language – has been on a roll lately. You've probably noticed! So the Stanford NLP Group has been growing, and diversifying into lots of new topics, including agents, language model programs, and socially aware #NLP.
nlp.stanford.edu
The idea certainly takes some getting used to, and it still seems very mysterious to me if I think about it in a focused way for too long!
Yes, you have a hammer, everything looks like a nail. For AI, we've entered an era in which people basically say, "I want to build something with hammers. I don't care what it is. Using hammers is my main requirement."
Listening to this awesome talk from @cgpotts.bsky.social .. so in love with the message here ..
As I'm building systems, the most common question (and review comment) I get is about the LL(M)M I'm using, not the systems and the problems they're solving ..
youtu.be/vRTcE19M-KE?...
Article on compositionality with @cgpotts.bsky.social in the new MIT Open encyclopedia of cognitive science! Check it out here: oecs.mit.edu/pub/e222wyjy.... Thanks to @asifamajid.bsky.social and Michael Frank for the opportunity!
Yes, I am so bummed about this! I keep looking in vain for the old menu and clicking what turns out to be the Templates button, which I never use.
On my reading, that passage shows that they were already considering prompt optimization and decoding time strategies to be adaptations, and the report covers tool-related things as well (RAG). This is what one would expect from the premise that FMs are (important) components of larger solutions.
I don't feel positioned to stand by everything in that report (but rather only my section). However, the above quote says "adaptation". Adaptation covers many things beyond fine-tuning. One could argue that it covers so many things as to be vacuous, but not that it was too narrow.
I also recommend the one where @trishacode.com and a friend drop a lambo from space. The tone somehow manages to be reverential and disinterested at the same time: youtu.be/4PKuluE_o1A?...