6/6 For more details, see:
Paper: arxiv.org/pdf/2502.09969
Code: github.com/agarwalishik...
Thank you so much to @dilekh.bsky.social and @convai-uiuc.bsky.social for their guidance and support during this project!
5/6 Finally, using our influence values, we pick a small subset & fine-tune the model. In our evaluation, we use 4 SOTA influence functions -- NN-CIFT achieves the same performance while using a model 34,000x smaller!
4/6 Second, we train the InfluenceNetwork using basic mini-batch gradient descent, then let it estimate the influence for the remaining data. It has a very low error of 0.067!
3/6 First, the neural network (called the "InfluenceNetwork") needs to be trained. We compute influence values using existing methods -- but only for a tiny fraction of data (just 0.25%-5%).
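The two-step recipe in this thread (compute "expensive" influence labels for a tiny fraction, then train a small regressor with mini-batch gradient descent) can be sketched as a toy. Everything here -- the architecture, dimensions, and synthetic labels -- is illustrative, not the paper's actual InfluenceNetwork:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for an "InfluenceNetwork": a tiny two-layer MLP that
# regresses a scalar influence value from a pair of concatenated
# embeddings. Dimensions, data, and targets are illustrative only.
d_emb, d_hid = 16, 32
W1 = rng.normal(0.0, 0.1, (2 * d_emb, d_hid)); b1 = np.zeros(d_hid)
W2 = rng.normal(0.0, 0.1, (d_hid, 1));         b2 = np.zeros(1)

def forward(X):
    h = np.maximum(X @ W1 + b1, 0.0)            # ReLU hidden layer
    return (h @ W2 + b2).ravel(), h

# Synthetic stand-ins for the small labelled fraction: pair embeddings
# plus "expensive" influence labels from some existing method.
X = rng.normal(size=(512, 2 * d_emb))
y = np.tanh(0.1 * X[:, :d_emb].sum(axis=1))

mae_before = np.abs(forward(X)[0] - y).mean()

# Basic mini-batch gradient descent on mean squared error.
lr, batch = 0.05, 64
for _ in range(200):
    order = rng.permutation(len(X))
    for s in range(0, len(X), batch):
        j = order[s:s + batch]
        pred, h = forward(X[j])
        g = (pred - y[j])[:, None] / len(j)     # dL/dpred
        gh = (g @ W2.T) * (h > 0)               # back through ReLU
        W2 -= lr * (h.T @ g);      b2 -= lr * g.sum(axis=0)
        W1 -= lr * (X[j].T @ gh);  b1 -= lr * gh.sum(axis=0)

# The trained network can now cheaply estimate influence for the rest.
mae_after = np.abs(forward(X)[0] - y).mean()
```

Once trained on the labelled fraction, `forward` is all that runs on the remaining data, which is where the cost savings come from.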
2/6 Estimating the value of data is expensive.
Past works use LLMs to estimate the influence of data -- we use small neural networks to *learn to estimate* influence, instead. This reduces costs and adapts to new data without heavy recomputation.
Here's how it works:
Very excited about my new paper!
NN-CIFT slashes data valuation costs by 99% using tiny neural nets (205k params, just 0.0027% of 8B LLMs) while maintaining top-tier performance!
Elated to announce that DELIFT has been accepted to ICLR'25! Looking forward to discussing it in Singapore!
Congratulations to @dilekh.bsky.social for her ACL Fellowship! www.aclweb.org/portal/conte...
The last response from Gemini in this thread may shock you: gemini.google.com/share/6d141b...
Thank you Guneet! Would love to hear more about these stress tests :)
Hey! Would love to be added :)
Can LLMs make us critical thinkers?
TreeInstruct reorients assistant-like LLMs to be instructors that guide students towards understanding their mistakes, without providing direct/indirect answers.
Check out aclanthology.org/2024.finding... (w/ @wonderingishika.bsky.social) to learn more!
All around the theme of data-efficient NLP:
(1) using influence functions to improve language model performance from less data
(2) enabling language models to generate queries for things they don't know
For more details, see:
Paper: arxiv.org/pdf/2411.04425
Code: github.com/agarwalishik...
Thank you so much to Krishnateja, Lucian, and Marina for their help, mentorship, and guidance during this project!
3. Continual fine-tuning: given a fine-tuned model, enabling it to integrate new and complementary information while mitigating catastrophic forgetting. We find that reducing the dataset helps remove samples that hinder performance, surpassing the performance of the full dataset.
2. Task-specific fine-tuning: given an instruction-tuned model, refining the LLM's expertise in specific domains. We find that pruning the dataset removes noise and keeps relevant examples, achieving better performance than fine-tuning on the full dataset.
1. Instruction tuning: given a base model, fine-tuning a model to follow general instructions. We find that performance drops are minimal when reducing the dataset by 70%.
DELIFT quantifies the information present in a sample wrt an LLM's capabilities. Using submodular functions, DELIFT can automatically adapt the chosen subset based on the objectives in the 3 stages of language model fine-tuning:
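A generic greedy subset-selection sketch in the spirit of the submodular step above. Facility location is just one commonly used submodular objective; this toy is not DELIFT's actual utility function, and all names and data here are illustrative:

```python
import numpy as np

def facility_location_gain(sim, covered, cand):
    # Marginal gain of adding `cand` under the facility-location
    # objective f(S) = sum_i max_{j in S} sim[i, j].
    return np.maximum(sim[:, cand] - covered, 0.0).sum()

def greedy_select(sim, k):
    """Standard greedy maximization of a facility-location objective.
    `sim` is an (n, n) pairwise similarity matrix; returns k indices."""
    n = sim.shape[0]
    selected = []
    covered = np.zeros(n)   # covered[i] = max similarity to chosen set
    for _ in range(k):
        gains = [facility_location_gain(sim, covered, c)
                 if c not in selected else -1.0 for c in range(n)]
        c = int(np.argmax(gains))
        selected.append(c)
        covered = np.maximum(covered, sim[:, c])
    return selected

# Toy usage: embeddings -> cosine similarity -> pick 3 representatives.
rng = np.random.default_rng(1)
emb = rng.normal(size=(20, 8))
emb /= np.linalg.norm(emb, axis=1, keepdims=True)
sim = np.clip(emb @ emb.T, 0.0, None)   # non-negative similarities
subset = greedy_select(sim, 3)
```

Because facility location is monotone submodular, this greedy loop comes with the classic (1 - 1/e) approximation guarantee, which is why greedy selection is the workhorse for objectives of this family.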
I'm so excited to share my latest paper called DELIFT along with Krishnateja Killamsetty, Lucian Popa, and Marina Danilevsky at IBM Research!
We tackle expensive fine-tuning by selecting a small subset of informative data that targets a model's weaknesses.
TreeInstruct is preferred 78.43% of the time. It solves 14.09% more bugs across all settings, and our questions are 14.18% better at addressing bugs, maintaining relevance, and ensuring logical conversation flow. TreeInstruct also adapts to human students of varying backgrounds.
TreeInstruct estimates the knowledge a student needs to debug their code and devises a conversation plan. It then dynamically constructs a question tree based on its interactions with the student, navigating the knowledge state space till the student comprehends & fixes all bugs.
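The knowledge-state navigation above can be caricatured as a tiny loop: the tutor tracks which concepts the student has grasped and asks the question for the shallowest unresolved prerequisite. The tree, the "student", and every name here are made up for illustration; the real system builds its tree dynamically with an LLM:

```python
# Toy sketch of state-space navigation in the spirit of TreeInstruct.
# Each node is a concept with prerequisite concepts and a question.
question_tree = {
    "off_by_one": {"prereqs": [],
                   "question": "What indices does your loop visit?"},
    "mutation_in_loop": {"prereqs": ["off_by_one"],
                         "question": "What happens when you modify a "
                                     "list while iterating over it?"},
}

def tutor_session(tree, answers, max_turns=10):
    """Ask questions until every concept is resolved (or turns run out).
    `answers[concept]` is a scripted list of correct/incorrect replies."""
    resolved, transcript, turns = set(), [], 0
    while len(resolved) < len(tree) and turns < max_turns:
        # Pick an unresolved concept whose prerequisites are all resolved.
        concept = next(c for c, node in tree.items()
                       if c not in resolved
                       and all(p in resolved for p in node["prereqs"]))
        transcript.append(tree[concept]["question"])
        turns += 1
        if answers[concept].pop(0):
            resolved.add(concept)
        # else: the real tutor would descend to a finer-grained
        # follow-up question; this toy simply re-asks.
    return transcript, resolved

# Scripted student: misses the first question once, then recovers.
transcript, resolved = tutor_session(
    question_tree,
    {"off_by_one": [False, True], "mutation_in_loop": [True]})
```

The point of the caricature: the tutor never reveals a fix, it only chooses which question to ask next based on the current knowledge state.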
github.com/agarwalishik...
We apply TreeInstruct to code debugging. Prior works directly give away bugs/fixes, assume single-turn conversations, and only work for one bug. We create a realistic, multi-bug dataset, where the bugs are mutually dependent.
Can LLMs make us critical thinkers?
TreeInstruct reorients LLMs to be instructors that guide students socratically to solve problems, instead of assistants that provide direct answers.
Check out our EMNLP2024 paper at arxiv.org/abs/2406.11709 (w/ @pkargupta.bsky.social) to learn more!
I'd love to be added - thank you!!