@bluedebris
Researcher in computational neuroscience. Interested in memory and perception! Working with a great team @BathellierLab, Institut de l'Audition, Paris. Personal page: https://aquaresima.github.io Julia Spiking Neural Networks: https://bit.ly/4mAbRly
I mean, the only good thing about current international political affairs is that if they go all in on cannons, maybe they'll stop building this dumbass tech
Really? Who would have thought?!
Considering they're popular among tech bros, it could be good news
OK ok, the comparison only works in ML. We can't look up past inputs to make predictions about the future. Is the reservoir just one possible memory implementation?
RC is indeed quite a cool computational framework; however, it has shortcomings. 1. Where is the supervised decoder? 2. How do you motivate learning? (But see recent work by @bconfavreux.bsky.social.)
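To make point 1 concrete, here is a minimal echo state network in plain Julia (my own toy sketch, not from any particular library): the recurrent reservoir stays fixed and random, and all the learning happens in the supervised linear readout `Wout`.

```julia
# Minimal echo state network sketch: the reservoir is fixed; only Wout is trained.
using LinearAlgebra, Random
Random.seed!(0)
N, T = 200, 500                        # reservoir size, number of time steps
W    = 0.9 .* randn(N, N) ./ sqrt(N)   # fixed random recurrent weights (spectral radius ≈ 0.9)
Win  = randn(N)                        # fixed input weights
u    = sin.(0.1 .* (1:T))              # toy input signal
y    = circshift(u, -1)                # target: predict the next input
X = zeros(N, T)                        # reservoir states over time
for t in 1:T
    xprev = t == 1 ? zeros(N) : X[:, t-1]
    X[:, t] = tanh.(W * xprev .+ Win .* u[t])   # reservoir update, no learning here
end
Wout = X' \ y                          # the supervised decoder: a least-squares readout
println("readout MSE: ", sum(abs2, X' * Wout .- y) / T)
```

The whole "training" is one least-squares solve; that linear decoder is the piece the framework quietly assumes.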
I went down the rabbit hole and found this:
www.nature.com/articles/s41...
You can do RC without a network 🤣
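If the paper is in the "reservoir-free" delay-embedding spirit (an assumption on my part; I haven't reproduced it), the idea looks roughly like this: replace the recurrent network with time-delayed inputs and their products, and keep the same least-squares readout.

```julia
# Sketch of a network-free readout: features are past inputs and their products.
using LinearAlgebra
T = 500
u = sin.(0.1 .* (1:T)) .+ 0.3 .* sin.(0.37 .* (1:T))   # toy signal
y = circshift(u, -1)                                    # target: next value
feats(t) = [1.0, u[t], u[t-1], u[t]^2, u[t] * u[t-1], u[t-1]^2]
X = hcat([feats(t) for t in 2:T-1]...)                  # 6 × (T-2) feature matrix
Wout = X' \ y[2:T-1]                                    # same least-squares readout as before
println("MSE: ", sum(abs2, X' * Wout .- y[2:T-1]) / (T - 2))
```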
Hire people, not AI
Recorded? :)
Our review "Neuro-oscillatory models of cortical speech processing", authored by Olesia Dogonasheva, has finally been published. Check it below ⬇️
#compneuro #neuroskyence #EEG
When you are debugging and this song comes up three times on your playlist, you can't help but wonder, "Why do they scream OREGANO??"
link.deezer.com/s/31jU4pHH39...
Sure, in the end, everything is an RNN ;)
Mongillo's model indeed requires pre-formed attractors. However, it opens the door to other sub-threshold memory mechanisms that can maintain STM when LTMs are not yet present.
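For anyone following along, a toy sketch of the short-term plasticity variables behind the synaptic working-memory idea (the parameter values are my assumptions, not the paper's): the facilitation variable `u` outlives the spikes that set it, so the memory sits sub-threshold without persistent activity.

```julia
# Toy Tsodyks-Markram-style short-term plasticity (assumed parameters).
function stp_trace(tmax; τf=1.5, τd=0.2, U=0.2, dt=1e-3)
    u, x, trace = U, 1.0, Float64[]
    for t in 0:dt:tmax
        u += (U - u) / τf * dt              # facilitation relaxes back to U
        x += (1 - x) / τd * dt              # resources recover toward 1
        if 0.5 ≤ t < 0.6 && rand() < 50dt   # ~50 Hz cue burst for 100 ms
            u += U * (1 - u)                # each spike facilitates u
            x -= u * x                      # ... and depletes resources x
        end
        push!(trace, u)                     # u is the "silent" memory trace
    end
    trace
end

trace = stp_trace(2.0)
println("baseline u = 0.2;  u at 1.4 s after the cue = ", round(trace[end], digits=3))
```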
So maybe, if an RNN has memory that scales with network size, it is a better model of activated LTM than of STM. No?
4-7 *new* items! If you learn to use a retrieval structure, you can go pretty big, as all those crazy mnemonists do!
Online WM of novel items requires variable binding, and I doubt you can get it in RNNs.
Check this out for a cogsci perspective
pmc.ncbi.nlm.nih.gov/articles/PMC...
IMHO, because it does not have "memory slots" ready for it; and if it does have memory slots, it lacks the machinery to convert them back into sensory representations. But you're the RNN expert; if you say so, I believe you ;)
Can it distinguish a shape it saw once from one it never saw? And despite visual distractors?
Maybe we do have five stable attractors in PFC and use them to re-cue sensory perceptions. But we need to be clear about active/silent interactions to move forward in this direction.
True. However, "attractor `iff` persistent activity" holds only if we consider the firing rate as the only variable at play in the brain. Any cognitive theory of WM requires allocating new items on the fly. Can we build attractors through single-shot learning? Or do we repurpose the same ones?
True! However, we have learned about long-term memory formation and maintenance from snails. Divergent evolutionary paths can still implement similar computational mechanisms. IMHO, recurrent activity is not as prominent in monkey PFC as in the fruit fly MB, but the comparison holds.
If you're still updating the starter pack, add me in ;)
I want to be the guy on the left. Deliver us, Lord, from AI
In the image I see a middle-aged person with a wise face, probably mastering a complex piece of software, vs. a youngster with a naive smile clicking buttons on an anonymous interface while the world burns
Then send a link to the blog ;)
At #BernsteinConference, heated debate on WM mechanisms. I love that this is still far from settled!
Trained RNNs from A. Compte align with fruit fly direction-WM data from K. Nagel: it is persistent.
Large-scale recordings in macaque V1 from T. Moore's lab: it is not.
#neuroskyence what's your take?
And concluding with a beautiful thermodynamics of learning:
The correct learning rate depends on the noise of the training set (σ_η)!
And remembering the great neuroscientist **and Middle East pacifist** Daniel Amit!
So cool to walk from Ising models to recurrent network dynamics at Sara Solla's Braitenberg prize ceremony! At the #bernstein conference #compneurosky
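For anyone who hasn't made that walk: a Hopfield network is an Ising model whose couplings store a pattern, and recall is just spin dynamics. A toy illustration (mine, with arbitrary sizes):

```julia
# Toy Hopfield recall: an Ising model whose couplings store one pattern.
using Random, LinearAlgebra
Random.seed!(1)
N = 100
ξ = rand([-1, 1], N)                    # stored pattern (a spin configuration)
J = (ξ * ξ') / N                        # Hebbian couplings
J[1:N+1:end] .= 0                       # no self-coupling
s = copy(ξ)
s[randperm(N)[1:20]] .*= -1             # corrupt 20 spins
for _ in 1:5, i in randperm(N)          # asynchronous zero-temperature updates
    s[i] = dot(J[i, :], s) ≥ 0 ? 1 : -1
end
println("overlap with stored pattern: ", dot(s, ξ) / N)   # ≈ 1.0 after recall
```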
The library strikes a balance between defining new models and composing them into fairly complex networks.
If you would like some specific feature and cannot achieve it, just contact me and we'll find a way together!
Hey #neuroskyence and #compneurosky
I recently released this #Julia library for simulating Spiking Neural Networks!
bit.ly/4mAbRly
As with everything in Julia, it is fast, elegant, and compact!
Check out the documentation and the tutorials to get an idea!
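To give a flavour of the kind of model such a simulator handles, here's a single leaky integrate-and-fire neuron in plain Julia. This is deliberately *not* the library's API (see the docs linked above for that); it's just a self-contained toy:

```julia
# Single leaky integrate-and-fire neuron: integrate, spike at threshold, reset.
function lif(; τ=20e-3, Vth=-50e-3, Vrest=-70e-3, R=1e8, I=2.5e-10,
             dt=1e-4, tmax=0.5)
    V, spikes = Vrest, Float64[]
    for t in 0:dt:tmax
        V += dt / τ * (-(V - Vrest) + R * I)   # leaky integration of the input current
        if V ≥ Vth
            push!(spikes, t)                   # record spike time
            V = Vrest                          # reset the membrane
        end
    end
    spikes
end

spikes = lif()
println("firing rate: $(length(spikes) / 0.5) Hz")   # over the 0.5 s default run
```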
Where do they get all this money to infest our market anyway???
And yess! The students loved it :)
Hey #neuroskyence #compneurosky
I made this interactive tool for showing the phase planes of 2D neuron models to students!
Have fun :)
github.com/JuliaSNN/Com...
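If you want a feel for what the tool shows (this sketch is mine, not the repo's code): a phase plane is organized by the nullclines of the two variables. Here are FitzHugh-Nagumo's, whose intersection is the fixed point.

```julia
# Nullclines of the FitzHugh-Nagumo model with standard parameters.
a, b, τ, I = 0.7, 0.8, 12.5, 0.5      # τ scales dw/dt but does not move the nullcline
w_vnull(v) = v - v^3 / 3 + I          # dv/dt = 0: the cubic nullcline
w_wnull(v) = (v + a) / b              # dw/dt = 0: the linear nullcline
for v in -2.0:0.5:2.0                 # sample a few points along each curve
    println("v = $v:  v-nullcline w = $(round(w_vnull(v), digits=2)), ",
            "w-nullcline w = $(round(w_wnull(v), digits=2))")
end
```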