Pavel Tolmachev (@pawa-pawa)

Researcher in Computational Neuroscience at @PrincetonNeuro, with focus on neural representations, RNNs and reinforcement learning

49 Followers · 31 Following · 4 Posts · Joined 20.07.2024

Latest posts by Pavel Tolmachev @pawa-pawa


Finally, these distinctions are symptomatic of deeper differences: tanh and ReLU/sigmoid RNNs discover distinct circuit solutions to a context-dependent decision-making task. The differences in circuitry become critical when the RNNs are exposed to novel stimuli outside the training range.

24.10.2025 19:14 · 👍 2 · 🔁 0 · 💬 0 · 📌 0

We further show that RNNs with different activation functions exhibit distinct dynamics, as characterized by the configuration of fixed points and trajectory end points, with tanh RNNs consistently displaying significant divergence from ReLU and sigmoid ones.

24.10.2025 19:14 · 👍 6 · 🔁 1 · 💬 1 · 📌 0
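One common way to characterize such dynamics is to locate fixed points of the state-update map and compare them across activation functions. A minimal sketch with made-up small weights (not the paper's trained networks), where the update is a contraction so simple iteration converges:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 6
W = 0.15 * rng.standard_normal((n, n))  # illustrative recurrent weights
b = 0.1 * rng.standard_normal(n)        # illustrative bias

def step(x, phi):
    # One update of a discrete-time vanilla RNN with no external input.
    return phi(W @ x + b)

def fixed_point(phi, iters=1000):
    # For these small weights the map is a contraction, so repeated
    # iteration converges to a fixed point x* = phi(W x* + b).
    x = np.zeros(n)
    for _ in range(iters):
        x = step(x, phi)
    return x

x_tanh = fixed_point(np.tanh)
x_relu = fixed_point(lambda z: np.maximum(z, 0.0))
x_sigm = fixed_point(lambda z: 1.0 / (1.0 + np.exp(-z)))

# Even with identical weights, the three activations settle onto
# different fixed-point configurations; e.g. ReLU and sigmoid states
# are nonnegative while tanh states need not be.
```

Trained networks typically have many fixed points found from different initial states; this sketch shows only the basic recipe for one.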

The choice of activation function in RNNs is often assumed to have only a minimal effect on their trajectories. We analyzed ReLU, sigmoid, and tanh RNNs on diverse tasks and found differences in their neural trajectories and individual neuron responses, challenging this assumption.

24.10.2025 19:14 · 👍 2 · 🔁 0 · 💬 1 · 📌 0
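As a rough sketch of that comparison (with random, untrained weights assumed purely for illustration), one can drive the same recurrent weights with the same input sequence and watch the state trajectories separate under different activations:

```python
import numpy as np

rng = np.random.default_rng(1)
n, T = 8, 50
W = 0.3 * rng.standard_normal((n, n))       # shared illustrative weights
inputs = 0.5 * rng.standard_normal((T, n))  # shared input sequence

def run(phi):
    # Roll out x_{t+1} = phi(W x_t + u_t) from the origin.
    x = np.zeros(n)
    traj = np.empty((T, n))
    for t in range(T):
        x = phi(W @ x + inputs[t])
        traj[t] = x
    return traj

traj_tanh = run(np.tanh)
traj_relu = run(lambda z: np.maximum(z, 0.0))

# Per-timestep distance between the tanh and ReLU trajectories: identical
# weights and inputs do not give identical trajectories, and ReLU confines
# the state to the nonnegative orthant while tanh does not.
gap = np.linalg.norm(traj_tanh - traj_relu, axis=1)
```

The paper's analysis concerns trained networks on cognitive tasks; this toy rollout only shows why the activation function shapes trajectories at all.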
Single-unit activations confer inductive biases for emergent circuit solutions to cognitive tasks · Nature Machine Intelligence
Recurrent neural networks are widely used to model brain dynamics. Tolmachev and Engel show that single-unit activation functions influence task solutions that emerge in trained networks, raising the ...

Excited to share our new work with @engeltatiana.bsky.social!

RNNs are often used to explore how the brain may solve specific tasks. We show that, depending on the architecture, RNNs find distinct circuit solutions, behaving differently when exposed to novel stimuli.
www.nature.com/articles/s42...

24.10.2025 19:14 · 👍 27 · 🔁 10 · 💬 1 · 📌 1