The "alt" texts for videos are not displaying for me on the bluesky website. Those contain credits. We want to explicitly mention the zebrafish video which was taken from: Torrents et al 2025 (eLife) available on idtracker.ai/latest/index...
See the original OCTRON thread and the biorxiv preprint here: bsky.app/profile/octo...
12/12
We are in the process of updating the documentation, including command line usage, at octron-tracking.github.io/OCTRON-docs/
11/12
Stay tuned for future updates!
OCTRON is under active development and we are adding more and more features.
E.g. does anyone want to see key (anchor) point tracking and skeletons in OCTRON?
10/12
None of this would be possible without the amazing @napari.org community. After predicting your videos, just drag and drop the prediction folder back into napari and explore the results. And did you know you can link layers in napari to adjust their parameters all at once?
9/12
If you use segmentation models for prediction, you now also have the option to calculate additional parameters. These will be exported alongside your detected regions for each video you analyze. If you don't use the GUI you can even inject your own functions here (documentation coming).
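Since the injection API isn't documented yet, here is only a hypothetical sketch of what such a user-supplied parameter function might look like: the function name and the dict-based return shape are assumptions, not OCTRON's actual interface. It computes area and centroid from a binary region mask in pure Python.

```python
# Hypothetical sketch - the OCTRON injection API is not yet documented,
# so the function name and return format here are made up for illustration.
# A function like this would receive one detected region's binary mask and
# return extra parameters to export alongside it.

def region_parameters(mask):
    """Compute area and centroid of a binary mask given as rows of 0/1."""
    area = 0
    sum_y = 0
    sum_x = 0
    for y, row in enumerate(mask):
        for x, value in enumerate(row):
            if value:
                area += 1
                sum_y += y
                sum_x += x
    if area == 0:
        # Empty region: no centroid to report.
        return {"area": 0, "centroid": None}
    return {"area": area, "centroid": (sum_y / area, sum_x / area)}

# Tiny example: a 2x2 square of foreground pixels inside a 4x4 frame.
mask = [
    [0, 0, 0, 0],
    [0, 1, 1, 0],
    [0, 1, 1, 0],
    [0, 0, 0, 0],
]
print(region_parameters(mask))  # → {'area': 4, 'centroid': (1.5, 1.5)}
```

In a real pipeline you would compute this per region and per frame; libraries like scikit-image (`skimage.measure.regionprops`) provide many such measures out of the box.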
8/12
Both segmentation and object detection models work with our integrated SOTA multi-object trackers, so the performance and reliability for multi-object tracking are actually the same! Here is an example of a lightweight detection model (YOLO26m) trained in OCTRON on multiple zebrafish.
7/12
We introduced a global switch: Now you can use the same annotation data to train either heavy, detailed segmentation models or light, position-only detection models. The latter trains faster and runs much quicker at inference - especially for multi-object tracking. Perfect for real-time systems.
6/12
Now you can generate tons of video training data with 10-100+ objects in minutes in OCTRON and use it to train fast YOLO models. We already offer segmentation models (high-res masks + precise positions), but they're heavy. What if you just want fast single- or multi-object position tracking?
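For context, YOLO models in the Ultralytics family are usually trained against a small dataset config that points at exported images and labels. A minimal sketch of such a config - the paths and class name are illustrative assumptions, not OCTRON's actual export layout:

```yaml
# Hypothetical Ultralytics-style dataset config; OCTRON's exported
# folder structure and class names may differ.
path: my_octron_export      # dataset root directory
train: images/train         # training images (relative to path)
val: images/val             # validation images (relative to path)
names:
  0: zebrafish              # one class ID per annotated label
```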
5/12
The SAM3 model is powerful and works well for complex scenes. If you give your labels an expressive name, like "dark cell", you are implicitly helping the network to find these structures. Here are some chromatophores in squid skin (color cells) for your viewing pleasure.
4/12
But what if you wanted to annotate a school of fish? At a minimum you would have to click on all the fish individually, and in the worst case create a new annotation layer for every fish. Enter SAM3 "multi", which finds similar objects based on single prompts. It's magic!
3/12
SAM2 models in OCTRON make it possible to annotate subjects with fine anatomical detail. You can also group multiple objects under a single label class by placing them on separate layers, each distinguished with its own suffix.
2/12
BIG UPDATES!
OCTRON now supports the new SAM3 models (alongside SAM2)!
We've also added a global switch for Detection vs. Segmentation: Use the same annotations to train either lightweight detection or full segmentation models. Plus, we've added support for the new YOLO26 models.
1/12
Drawing of the "Aeon" project, an open-source platform to study the neural basis of ethological behaviours over naturalistic timescales.
Are you a neuroscientist with great coding skills, or a software engineer interested in the brain?
We are recruiting for a research software engineer to help us build pipelines to process weeks of neural and behavioural recordings from freely moving animals.
More details: bit.ly/rse-2026a
I'm building a foundational reading list for our lab (systems & circuit neuroscience, compneuro, modeling, neuromodulators, population coding etc.).
I'd like to crowdsource recommendations.
Which review(s) would you consider mandatory reading for the next generation of researchers?
We have PhD and postdoc positions available! www.med.uio.no/imb/english/... Please apply if you are interested in human invasive intracranial recordings or applied machine learning in a neuroscientific context.
Deadlines 1st of March (Sunday!) and 4th of March.
Please RT!
Two-photon calcium imaging at 24,000 lines/s, with the resonant axis spanning 4x what other systems can do. Inertia-free. Diffraction-limited. No tradeoffs. Che-Hang Yu developed a 4x angle multiplier for laser scanning. His paper is out today: opg.optica.org/optica/fullt... 1/n #fluorescenceFriday
It really is. Stay safe.
I second CAR-Ts!
Don't worry!
I remember when your preprint popped up! www.biorxiv.org/content/10.1... (ours: www.biorxiv.org/content/10.1...) It was cool to see it being used by you too. I do think it's quite common that people converge on the same solution around the same time in our fields, and it felt validating tbh. (:
Not to throw shade on your publications and discoveries, but for Moran's I in neuro-topographical analysis I am not sure you can claim you introduced it to the field www.pnas.org/doi/full/10....
Really nice work from the Kentros lab: repeating chemogenetic manipulation of grid cells produces the same place cell remapping each time. Direct evidence that grid subfield rate changes predictably drive place field reorganisation. Love it https://doi.org/10.64898/2026.02.10.705142
I am glad you are voicing these concerns and I totally agree with you.
It's not in the docs, but yes, you could do that if you can calibrate your FOVs. We don't offer 3D calibration procedures atm, but that's something we hint at in the discussion of our preprint. So if you are interested in that this would definitely be something we could solve! @claynerd.bsky.social
You can use #OCTRON to track cells too!
It took 10 min to annotate multiple cells in 146 frames, 1.5 hrs to train a model with a GPU, and 10 sec to make these predictions. It doesn't get much easier than that!
We had a lot of fun doing these experiments! Theta sweeps in MEC and internal direction signals in parasubiculum track moving objects during pursuit and reverse during backward movement. Simultaneously recorded HD cells in other areas remain locked to head direction across behaviors:
Where does learning through imitation happen in the brain?
In juvenile zebra finches, we pinpoint a synaptic locus of song learning in a cortico-basal ganglia circuit and leverage this localization to measure the timescale of consolidation and make birds learn faster! #neuroskyence (1/14)
This felt good.
This must be one of the most wholesome videos I have ever seen accompanying a publication: https://www.youtube.com/watch?v=SGVo41MuLiQ