Another throughline: models went from absurdly over-simplified to absurdly over-complicated. In parallel, we started out creating simple models to understand the brain and ended up studying complex models for their own sake
I would say that the late 2000s optogenetics honeymoon led to a focus on circuit cracking, but more recently there's been a revival of good old record a million neurons and correlate some stuff. We need both. Reduce when possible but recognize complexity too
thank you! We had a great discussion at journal club about your work
I saw someone say "nip it in the butt" instead of "nip it in the bud" in a slack channel and I felt like I should say something but didn't
I see. Thanks! Word processing, tax prep software, and a few lab things (CAD, PCB layout, etc) are the last few legacy apps I can't quite quit. Some day!
Word 2010 was good for its day though. It's served me well for 16 years
You don't exchange documents with students or colleagues with "track changes" enabled?
I'm ready to leave Word 2010 (running in Wine on Linux). LibreOffice is too buggy for me. GDocs is missing the killer feature of "view as if all tracked changes were accepted". I work better with WYSIWYG (so not LaTeX). I don't want to work in the cloud. Is Typora the way to go?
Q2) It seems you find representation neurons connect to PE neurons with the "wrong" sign. I think Larkum proposed an architecture that also has the "wrong" sign wrt predictive processing: matches between prediction and input are facilitated, not cancelled. Does that match your data? doi.org/10.1016/j.ti...
Hi Anna, I've read the paper now, great work! Wanted to ask about Fig 9 (and about JEPAs generally). You point out downsides to computing PEs in the input space. But what is the value of two networks predicting each other? What keeps their representations useful and grounded in reality?
www.reddit.com/r/Programmer...
Second preprint from the lab. Collab with @dkoveal.bsky.social, with many more to come! Effort led by @xshirleyz.bsky.social with help from Brittany Addison, @ezeyulu00.bsky.social, Claire Deng (on the grad school market, better act fast, Claire's amazing!), @ajemanuel.bsky.social, and many others!
I get that the knob is useful, just not how it makes the results exempt from the concern you mentioned, which is that you never know exactly why you got the representations you got; it could still be the things you mention (optimization, initialization, architecture, network size...)
"Conventional training yields single network with no way to explore how internal representations vary. Hard to know what that resemblance reflects... Ξ³ provides the missing knob" Can you help me understand? Even with knob, can still get a variety of possible networks for a variety of reasons
Ha, I imagine that the thrust of that feedback was probably to encourage concise threads, not long blog posts, but looks good in any case!
Yang 2016. Theoretical perspectives on active sensing. https://doi.org/10.1016/j.cobeha.2016.06.009
Parker 2020. Movement-Related Signals in Sensory Areas: Roles in Natural Behavior. https://doi.org/10.1016/j.tins.2020.05.005
Perich and Rajan 2020. Rethinking brain-wide interactions through multi-region "network of networks" models. https://doi.org/10.1016/j.conb.2020.11.003
Keller and Mrsic-Flogel 2018. Predictive Processing: A Canonical Cortical Computation. https://doi.org/10.1016/j.neuron.2018.10.003
Clark and Chalmers 1998. The extended mind. http://dx.doi.org/10.1093/analys/58.1.7
I don't know if I'd go so far as "mandatory reading for a future generation", but here's an excerpt from mine! Our focus is on active sensing and sensorimotor interactions.
Trois-Rivières, of course
The Google Summer of Code and Neuroinformatics Unit logos next to each other.
Do you want to spend the summer being paid to contribute to open source neuroscience software?
We are taking part in Google Summer of Code again, offering paid, remote placements to work on one of our tools. Open to (nearly) everyone worldwide.
More details: neuroinformatics.dev/get-involved...
7/7 I relearned the ancient lessons: 1) your time sync is only as good as your path to the NTP root (e.g., an atomic clock); 2) extensive averaging will eventually tamp down the noise. Surprisingly, this actually works!
Thanks to chrony-project.org & open-source developers everywhere for your hard work!
6/7 Caveat: not a real "error" but rather chrony's estimate of error.
Often chrony overestimates the error: it thinks there's a big positive error followed by a big negative error (e.g., after a wifi delay). I'm sure other times it underestimates.
In expt, we want Pis synced with each other, even if off from reality.
5/7 Key settings: 1) use a local NTP server; 2) increase maxdelaydevratio to 1e3 (otherwise it drops polls with wifi delay, which happen right during the expt!); 3) decrease poll time to 2 s and set a median filter of 7 to handle crazy wifi delay; 4) increase corrtimeratio from 3 to 1e6 (!) to avoid overcorrection
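Not spelled out in the thread, but for anyone wanting to try this, here's a minimal sketch of what those settings might look like in each Pi's chrony.conf. The server address is a placeholder, and poll intervals are log2 seconds, so 1 means 2 s:

```
# /etc/chrony/chrony.conf on each Pi -- a sketch; 192.168.0.10 is a
# placeholder for the local NTP server (the desktop)
server 192.168.0.10 iburst minpoll 1 maxpoll 1 filter 7 maxdelaydevratio 1000

# Default corrtimeratio is 3; a huge value makes clock corrections very
# gentle, trading responsiveness for stability (avoids overcorrecting
# after wifi delay spikes)
corrtimeratio 1000000
```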
Same setup as in the previous figure, but for the optimized conditions. All panels now have much finer temporal resolution because polling is more frequent. Top panel: note frequent excursions in clock rate on each Pi (I never figured out how to stop this). Middle panel: time offsets are now generally <1 ms; the large black offset at noon is not a real error but a glitch after changing a setting. Bottom panel: rolling error is stably at 10 us. Each Pi's error tracks the desktop error and is in fact slightly better, due to a longer median filter on the Pi canceling out errors.
4/7 After optimizing chrony settings (see next post), performance is much better! Typical clock error is 10 us. (NB: not a real "error" - see caveats at end.) Excursions >1 ms are rare. Note the Pi clock error (bottom panel) tracks the desktop's error (black trace).
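If you want to reproduce the bottom-panel metric (rolling median of log abs deviation of the offsets), it's only a few lines. A sketch, assuming you've already parsed per-poll offsets (in seconds) out of chrony's logs; the window length is illustrative, not from the thread:

```python
import numpy as np
import pandas as pd

def rolling_log_abs_dev(offsets_s, window=51):
    """Rolling median of log10(|offset|): a robust summary of clock error.

    offsets_s: per-poll clock offsets in seconds (e.g., parsed from
    chrony's statistics log). The window length here is a guess.
    """
    log_abs = np.log10(np.abs(offsets_s) + 1e-12)  # avoid log10(0)
    return pd.Series(log_abs).rolling(window, center=True).median()

# Example: 1 ms of Gaussian offset noise should hover around -3.2
# (i.e., a typical error of roughly 10^-3.2 s)
offsets = np.random.normal(0, 1e-3, 5000)
print(rolling_log_abs_dev(offsets).dropna().median())
```

The median (rather than a mean) keeps occasional huge wifi-delay excursions from dominating the summary.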
Three-panel figure. Top panel: clock frequency offset (adjustment applied to each device's clock, in ppm). Desktop (black) is stable; Pis (colored) are variable. Middle: clock time offset (error in ms), with many deviations above 1 ms. Bottom: rolling clock offset (median log abs deviation); devices vary hugely, from 1 us to 1 ms. Note: irregular sampling is due to polls dropped for excessive wifi delay, which becomes very frequent during the experiment, just when rapid polling would be most helpful.
3/7 With chrony defaults, performance was poor. In each fig, black is the desktop, Pis are colored. Top is clock frequency, middle is clock "offset" (error), bottom is smoothed error. The expt runs around 15:00 (note the freq dip as the CPU heats). Time offsets are all over the place.
2/7 Setup: desktop PC with an ethernet connection, running expt on 4 Pis using our software (inspired by @jo.nny.rip who also told me about chrony). Q: How close can we get their clocks? Approach: Desktop serves time to the Pis, and chrony client on each Pi adjusts its clock to stay in sync.
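For completeness, a sketch of the desktop side, which only needs to be told to serve time on the LAN (the subnet is a placeholder):

```
# chrony.conf additions on the desktop -- a sketch; subnet is a placeholder
allow 192.168.0.0/24

# Keep serving time to the Pis even if upstream NTP sources drop out
# (the Pis stay synced to each other, even if off from reality)
local stratum 10
```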
1/7 How tightly can 4 Raspberry Pis be synchronized over wifi? Our experiments require ~1 ms synchronization for reporting sound onsets. It seemed crazy to try to achieve this, but with chrony (chrony-project.org) it actually works! Details in thread
Why do you say "obstinate"?
To some extent you would expect this effect, right? It seems impossible for a "pioneer" to report "never used it", or a "hell no" person to report "use it all the time". I suspect causality runs "opinion on AI" -> "usage of AI", not the other way around here.
The point of building is not just to throw the finished product over the line; if youβre doing it right, the process shapes the creator, too.
Not to get too carried away with optimism or anything π
I agree. When folks rely on LLMs to generate their code or writing, they may (at best) save their own time, but they generate tons of work for others, who must verify and validate that content or deal with its errors. And yeah, it's ironic that the screenshot below was (allegedly) generated by an LLM
Polls right now don't reflect the incoming disinfo campaign ahead of the midterms. While Rs hear about looting and bathrooms, Ds will hear about corrupt pols, interchangeable parties, and hypocritical elites (messages effective on idealists). They don't have to change your mind, just demotivate you. Don't stop caring!