Excited to share our ICLR and NAACL papers! Please come and say hi, we're super friendly :)
For the paper, see the link below:
arxiv.org/abs/2503.04377
Slightly lazy, but I feel the need to post this in case it's too late... We will present this at the ICLR Workshop on Sparsity in LLMs (SLLM)! We found that the representation dimension can dominate model performance in structured pruning.
#ICLR2025 #LLM #sparsity
Oh, a square owl!
Do LLMs need rationales for learning from mistakes?
When LLMs learn from previous incorrect answers, they typically observe corrective feedback in the form of rationales explaining each mistake. In our new preprint, we find these rationales do not help; in fact, they hurt performance!
🧵
I saw the messages, a million thanks :D
Thanks! The links are impressively detailed, but I couldn't find the scheme for this orange ring (2DBA for this juvenile)...?
Ohhhhhhh, so sweet! It's near Round Pond, right? BTW, may I kindly ask if there are publicly available datasets with information on the tagged birds (or tag ID descriptions, etc.)?
Happy New Year Everyone!
Here are greetings from spring!
Welcome to Bluesky to more of our NLP researchers at Imperial!! Looking forward to following everyone's work on here.
To follow us all click 'follow all' in the starter pack below
go.bsky.app/Bv5thAb