Same boat as your AC
Could you add me please?
CBOW vs. Skip-gram
Great work! Are you going to release the models?
A starter pack for #NLP #NLProc researchers!
go.bsky.app/SngwGeS
#EMNLP has a nice set of tokenization/subword modeling papers this year.
It's a good mix of tokenization algorithms, tokenization evaluation, tokenization-free methods, and subword embedding probing. Lmk if I missed some!
Here is a list with links + presentation time (in chronological order).
First time ML/NLP Bluesky feels alive.
This helped a lot!
I make sure to even delete paths with my username from code in supplementary material
TIL that the ACL Wiki has/had a state-of-the-art overview:
aclweb.org/aclwiki/Stat...
It also works with Flash Attention 2, although I don't see additional speedups. I don't think FA is optimized for generation.
Conceptually it is clear that this works, but I wasn't aware that huggingface passes this through correctly.
Github Gist to reproduce:
gist.github.com/raphael-sch/...
You have to place the padding tokens in between the prefill and input tokens (example with 3 prefilled tokens):
input_ids: [0, 0, X, X, X, X]
position_ids: [0, 0, 3, 4, 5, 6]
attn_mask: [1, 1, 1, 0, 0, 1, 1, 1, 1]
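A minimal sketch of how these three arrays could be built for one padded instance. The helper name `build_prefill_inputs` and the concrete token IDs are my own assumptions, not from the gist; the layout matches the example above (3 cached prefill tokens, batch padded to length 6):

```python
def build_prefill_inputs(prefill_len, tokens, max_len, pad_id=0):
    """Build input_ids, position_ids, and attention_mask for one instance
    whose first `prefill_len` tokens are already in the KV cache."""
    n_pad = max_len - len(tokens)
    # left-pad the new tokens so the batch has uniform length
    input_ids = [pad_id] * n_pad + tokens
    # pad slots get a dummy position; real tokens continue after the prefill
    position_ids = [0] * n_pad + [prefill_len + i for i in range(len(tokens))]
    # mask covers cached prefill tokens (attended) + padded input tokens
    attn_mask = [1] * prefill_len + [0] * n_pad + [1] * len(tokens)
    return input_ids, position_ids, attn_mask

ids, pos, mask = build_prefill_inputs(prefill_len=3, tokens=[11, 12, 13, 14], max_len=6)
print(ids)   # [0, 0, 11, 12, 13, 14]
print(pos)   # [0, 0, 3, 4, 5, 6]
print(mask)  # [1, 1, 1, 0, 0, 1, 1, 1, 1]
```

These lists would then be converted to tensors and passed to the model's forward/generate call alongside the cached prefill.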
Turns out that with the right attention_mask and position_ids you can prefill tokens AND pad batches in huggingface transformers. This speeds up inference, especially if each instance has the same system prompt prepended. Code below.