
Donato Crisostomi ✈️ NeurIPS

@crisostomi

🧑‍🎓 PhD stud. @ Sapienza, Rome 🥷 stealth-mode CEO 🔬 prev. visiting @ Cambridge | RS intern @ Amazon Search | RS intern @ Alexa. 🆓 time: 🎭 improv theater, 🤿 scuba diving, ⛰️ hiking

68 Followers · 145 Following · 15 Posts · Joined 17.11.2024

Latest posts by Donato Crisostomi ✈️ NeurIPS @crisostomi

Will present this at #CVPR ✈️ See you in Nashville 🇺🇸!

Kudos to the team 👏
Antonio A. Gargiulo, @mariasofiab.bsky.social, @sscardapane.bsky.social, Fabrizio Silvestri, Emanuele Rodolà.

11.03.2025 08:02 👍 5 🔁 2 💬 0 📌 0
[Preview] Task Singular Vectors: Reducing Task Interference in Model Merging
Task Arithmetic has emerged as a simple yet effective method to merge models without additional training. However, by treating entire networks as flat parameter vectors, it overlooks key structural in...

Don’t miss out on these insights and more — check out the paper!

📄 Preprint → arxiv.org/abs/2412.00081

💻 Code → github.com/AntoAndGar/t...

Joint work w/ Antonio A. Gargiulo, @mariasofiab.bsky.social, @sscardapane.bsky.social, Fabrizio Silvestri, Emanuele Rodolà.

(6/6)

08.01.2025 19:00 👍 2 🔁 2 💬 0 📌 0
[Video thumbnail]

By orthogonalizing the (low-rank approximated) task singular vectors, we effectively eliminate interference.

This leads to an impressive +15% gain without any test-time adaptation!

The improvement is consistent across all datasets.

🧵(5/6)
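A minimal sketch of one way to implement this step (PyTorch; the function name and the polar-decomposition choice are mine, not necessarily the paper's exact whitening procedure):

```python
import torch

def orthogonalize_task_vectors(U_list):
    """Decorrelate task singular vectors by replacing the stacked matrix
    [U_1 | ... | U_T] with its closest orthogonal matrix (polar
    decomposition). Assumes the total kept rank <= layer width."""
    U_all = torch.cat(U_list, dim=1)                 # d x (T*k)
    P, _, Qh = torch.linalg.svd(U_all, full_matrices=False)
    U_orth = P @ Qh                                  # now orthonormal columns
    sizes = [U.shape[1] for U in U_list]
    return list(torch.split(U_orth, sizes, dim=1))   # back to per-task blocks
```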

08.01.2025 19:00 👍 1 🔁 0 💬 1 📌 0
[Post image]

We believe this happens due to inter-task interference, which we measure as the interplay between singular vectors from different tasks (A).

(B) This interference is higher in shallower (more general) layers and lower in deeper (more task-specific) layers!

🧵(4/6)
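As a toy illustration, a proxy for this measure could look as follows (PyTorch; my own simplification, not the paper's exact definition):

```python
import torch

def task_interference(U_i: torch.Tensor, U_j: torch.Tensor) -> float:
    """Cross-task interplay between two tasks' singular vectors at one
    layer: total squared mass of their pairwise inner products. Zero
    exactly when the two tasks' singular subspaces are orthogonal."""
    cross = U_i.T @ U_j            # k_i x k_j inner products across tasks
    return cross.pow(2).sum().item()
```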

08.01.2025 19:00 👍 0 🔁 0 💬 1 📌 0
[Post image]

But is low-rank approximation enough to achieve effective multi-task merging?

The answer is NO! In fact, it can even be detrimental.

Why is that?

🧵(3/6)

08.01.2025 19:00 👍 0 🔁 0 💬 1 📌 0
[Post image]

Let’s start with the low-rank structure:

Keeping only a small fraction (e.g., 10%) of the task singular vectors for each model preserves average accuracy!

🧵(2/6)
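In code, this step is just a per-layer truncated SVD (a PyTorch sketch; the function name and the `keep` parameter are mine):

```python
import torch

def low_rank_task_vector(w_finetuned, w_pretrained, keep=0.1):
    """Compress a layer-wise task vector to its top singular vectors."""
    delta = w_finetuned - w_pretrained               # task vector for this layer
    U, S, Vh = torch.linalg.svd(delta, full_matrices=False)
    k = max(1, int(keep * S.numel()))                # e.g. keep only 10%
    return U[:, :k], S[:k], Vh[:k, :]                # rank-k factors
```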

08.01.2025 19:00 👍 0 🔁 0 💬 1 📌 0
[Video thumbnail]

📢Prepend “Singular” to “Task Vectors” and get +15% average accuracy for free!

1. Perform a low-rank approximation of layer-wise task vectors.

2. Minimize task interference by orthogonalizing inter-task singular vectors.

🧵(1/6)
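For the impatient, a compact end-to-end sketch of the recipe for a single linear layer (PyTorch; all names are mine and the whitening is simplified, see the paper and repo for the real procedure):

```python
import torch

def tsv_merge_layer(w_pre, w_finetuned_list, keep=0.1):
    """Sketch: (1) low-rank task singular vectors per task,
    (2) orthogonalize them across tasks, (3) sum the reconstructions."""
    Us, Ss, Vhs = [], [], []
    for w_ft in w_finetuned_list:                    # one fine-tuned model per task
        U, S, Vh = torch.linalg.svd(w_ft - w_pre, full_matrices=False)
        k = max(1, int(keep * S.numel()))
        Us.append(U[:, :k]); Ss.append(S[:k]); Vhs.append(Vh[:k, :])

    def orth_blocks(blocks, dim):
        """Polar decomposition of the concatenated blocks, split back."""
        M = torch.cat(blocks, dim=dim)
        A = M if dim == 1 else M.T                   # orthonormalize columns
        P, _, Qh = torch.linalg.svd(A, full_matrices=False)
        O = P @ Qh
        O = O if dim == 1 else O.T
        return torch.split(O, [b.shape[dim] for b in blocks], dim=dim)

    Us = orth_blocks(Us, dim=1)                      # left singular vectors
    Vhs = orth_blocks(Vhs, dim=0)                    # right singular vectors
    delta = sum(U @ torch.diag(S) @ Vh for U, S, Vh in zip(Us, Ss, Vhs))
    return w_pre + delta                             # merged layer weights
```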

08.01.2025 19:00 👍 4 🔁 3 💬 1 📌 1

📣 Come check it out this Friday at #NeurIPS!

09.12.2024 16:38 👍 2 🔁 0 💬 0 📌 0
[Preview] $C^2M^3$: Cycle-Consistent Multi-Model Merging
In this paper, we present a novel data-free method for merging neural networks in weight space. Differently from most existing works, our method optimizes for the permutations of network neurons globa...

🤝Joint work with Marco Fumero, Daniele Baieri, Florian Bernard, Emanuele Rodolà

Preprint → arxiv.org/abs/2405.17897
Code → github.com/crisostomi/c...

BTW, if you made it to this last tweet, you are probably also interested in our workshop → unireps.org

Thanks! (6/6)

05.12.2024 08:37 👍 0 🔁 0 💬 0 📌 0
[Post image]

Yes, and it works best when applying REPAIR (Jordan et al., 2022) to fix the activation statistics!

The approach is designed to merge 3+ models (as CC doesn't make much sense otherwise), but if you are curious about applying Frank-Wolfe for n=2, please check the paper!

(5/6)
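For intuition, the correction is roughly the following (my simplified PyTorch sketch; the actual REPAIR method folds the rescaling into the network's weights rather than applying it on the fly):

```python
import torch

def repair_rescale(acts_merged, target_mean, target_std, eps=1e-8):
    """Per-neuron affine correction in the spirit of REPAIR: shift and
    rescale the merged model's preactivations so each neuron's mean/std
    match target statistics (e.g. averaged endpoint-model statistics)."""
    mu = acts_merged.mean(dim=0)                     # per-neuron mean
    sigma = acts_merged.std(dim=0) + eps             # per-neuron std
    return (acts_merged - mu) / sigma * target_std + target_mean
```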

05.12.2024 08:37 👍 0 🔁 0 💬 1 📌 0
[Post image]

Yeah, but why bother?

1) Optimizing globally, we no longer get variance from the random layer order

2) The models in the universe are much more linearly connected than before

3) The models in the universe are much more similar

Does this result in better merging?
(4/6)

05.12.2024 08:37 👍 0 🔁 0 💬 1 📌 0
[Post image]

Ok but how?

1) Start from the weight-matching equation introduced by Git Re-Basin

2) Consider perms between all possible pairs

3) Replace each permutation A → B with one mapping to the universe, A → U, and one mapping back, U → B

4) Optimize with Frank-Wolfe

(3/6)
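The nice part of step 4 is that the linear minimization oracle over doubly stochastic matrices is just a linear assignment problem. A sketch of one generic Frank-Wolfe step (NumPy/SciPy; simplified to a single permutation variable instead of the paper's joint objective over all to-universe maps):

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def frank_wolfe_step(P, grad, t):
    """One Frank-Wolfe step over the Birkhoff polytope: the vertex that
    minimizes <grad, S> is a permutation matrix found by solving a
    linear assignment problem, then we move towards it."""
    rows, cols = linear_sum_assignment(grad)     # argmin_S <grad, S>
    S = np.zeros_like(P)
    S[rows, cols] = 1.0
    gamma = 2.0 / (t + 2.0)                      # standard FW step size
    return (1.0 - gamma) * P + gamma * S
```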

05.12.2024 08:37 👍 0 🔁 0 💬 1 📌 0
[Post image]

If you have 3+ models A, B, and C, permuting neurons from A to B to C and back to A gets you to a different model, as the composition of the perms is not the identity!

What then? Introduce a Universe space 🌌 and use it as a midpoint 🔀

This way, CC is guaranteed!

(2/6)
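A quick NumPy sanity check of why routing every map through the universe makes cycle-consistency automatic (toy permutation matrices; all names are mine):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5
random_perm = lambda: np.eye(n)[rng.permutation(n)]

# Each model gets one map into the shared universe space.
P_AU, P_BU, P_CU = random_perm(), random_perm(), random_perm()

# Pairwise maps are *defined* through the universe: A -> U -> B, etc.
P_AB = P_BU.T @ P_AU
P_BC = P_CU.T @ P_BU
P_CA = P_AU.T @ P_CU

# Going around the cycle A -> B -> C -> A is exactly the identity.
assert np.allclose(P_CA @ P_BC @ P_AB, np.eye(n))
```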

05.12.2024 08:37 👍 0 🔁 0 💬 1 📌 0
[Post image]

I know you're probably thinking, "Yeah, these neuron-permutation-based model merging methods are cool... but are they cycle-consistent (CC)?"

Say no more!
It just so happens that our new #NeurIPS24 paper covers exactly this!

Huh? No idea what I am talking about? Read on
(1/6)

05.12.2024 08:37 👍 2 🔁 1 💬 1 📌 1

First blue post (still have to figure out what tweets are called here)

💡idea: we consider task vectors at the layer level and reduce task interference by decorrelating the task-specific singular vectors of any matrix-structured layer

🔬results: large-margin improvements across all vision benchmarks

05.12.2024 08:11 👍 7 🔁 2 💬 0 📌 0