
Horst Obenhaus

@octoscience

PostDoc at the Kavli Institute for Systems Neuroscience at NTNU Trondheim. Whitman Scientist at the Marine Biological Laboratory (MBL) in Woods Hole, MA. I am studying sleep in octopuses and cuttlefish. πŸ™

996
Followers
1,114
Following
200
Posts
06.02.2024
Joined

Latest posts by Horst Obenhaus @octoscience

idtracker.ai β€” idtrackerai 6.0.13 documentation

The "alt" texts for videos are not displaying for me on the Bluesky website; those contain credits. We want to explicitly credit the zebrafish video, which was taken from Torrents et al. 2025 (eLife), available at idtracker.ai/latest/index...

06.03.2026 14:15 πŸ‘ 0 πŸ” 0 πŸ’¬ 0 πŸ“Œ 0

See the original OCTRON thread and the biorxiv preprint here: bsky.app/profile/octo...
12/12

06.03.2026 14:09 πŸ‘ 1 πŸ” 0 πŸ’¬ 1 πŸ“Œ 0
OCTRON Documentation for the OCTRON project

We are in the process of updating the documentation, including command-line usage, at octron-tracking.github.io/OCTRON-docs/
11/12

06.03.2026 14:08 πŸ‘ 0 πŸ” 0 πŸ’¬ 1 πŸ“Œ 0

Stay tuned for future updates!
OCTRON is under active development and we are adding more and more features.

E.g. does anyone want to see key (anchor) point tracking and skeletons in OCTRON? πŸ™‹πŸΌβ€β™€οΈ
10/12

06.03.2026 14:08 πŸ‘ 3 πŸ” 0 πŸ’¬ 1 πŸ“Œ 0
Video thumbnail

None of this would be possible without the amazing @napari.org community. After predicting your videos, just drag and drop the prediction folder back into napari and explore the results. And did you know you can link layers in napari to adjust their parameters all at once?
9/12

06.03.2026 14:08 πŸ‘ 0 πŸ” 0 πŸ’¬ 1 πŸ“Œ 0
Video thumbnail

If you use segmentation models for prediction, you now also have the option to calculate additional parameters. These will be exported alongside your detected regions for each video you analyze. If you don’t use the GUI you can even inject your own functions here (documentation coming).
8/12

06.03.2026 14:07 πŸ‘ 0 πŸ” 0 πŸ’¬ 1 πŸ“Œ 0
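As a hypothetical sketch of such an injected region-parameter function (this is illustrative only, not OCTRON's actual API, whose documentation is still forthcoming), assuming the pipeline hands each detected region over as a set of pixel coordinates:

```python
# Hypothetical user-defined region metric. Assumes a detected region
# arrives as a list of (row, col) pixel coordinates; the function name
# and signature are illustrative, not part of OCTRON's real interface.

def region_params(pixels):
    """Compute area and centroid for one detected region."""
    area = len(pixels)
    cy = sum(r for r, _ in pixels) / area  # mean row
    cx = sum(c for _, c in pixels) / area  # mean column
    return {"area": area, "centroid": (cy, cx)}

# A 2x2 square region covering rows 0-1, cols 0-1:
square = [(0, 0), (0, 1), (1, 0), (1, 1)]
print(region_params(square))  # {'area': 4, 'centroid': (0.5, 0.5)}
```

Any metric computed this way could then be exported alongside the detected regions, per video.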
Video thumbnail

Both segmentation and object detection models work with our integrated SOTA multi-object trackers, so the performance and reliability for multi-object tracking are actually the same! Here is an example of a lightweight detection model (YOLO26m) trained in OCTRON on multiple zebrafish.
7/12

06.03.2026 14:06 πŸ‘ 1 πŸ” 0 πŸ’¬ 1 πŸ“Œ 0
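The tracking-by-detection idea behind such pipelines can be illustrated with a minimal greedy IoU matcher. This is a toy stand-in for the integrated trackers, not OCTRON's implementation; real trackers add motion models, re-identification, and track management:

```python
# Toy tracking-by-detection association step: greedily match detections
# in consecutive frames by IoU of their bounding boxes (x1, y1, x2, y2).

def iou(a, b):
    """Intersection-over-union of two axis-aligned boxes."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    return inter / (area(a) + area(b) - inter) if inter else 0.0

def match(prev, curr, thresh=0.3):
    """Return {prev_index: curr_index} by greedy best-IoU assignment."""
    pairs = sorted(((iou(p, c), i, j) for i, p in enumerate(prev)
                    for j, c in enumerate(curr)), reverse=True)
    assigned, used_p, used_c = {}, set(), set()
    for score, i, j in pairs:
        if score < thresh:
            break  # remaining pairs overlap too little
        if i not in used_p and j not in used_c:
            assigned[i] = j
            used_p.add(i)
            used_c.add(j)
    return assigned

prev = [(0, 0, 10, 10), (20, 20, 30, 30)]   # frame t
curr = [(21, 21, 31, 31), (1, 1, 11, 11)]   # frame t+1, reordered
print(match(prev, curr))  # {0: 1, 1: 0}
```

Each detection in the new frame inherits the identity of its best-overlapping predecessor, which is what keeps track IDs stable across frames.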
Video thumbnail

We introduced a global switch: now you can use the same annotation data to train either heavy/detailed segmentation models or light/position-only detection models. The latter trains faster and inference is much quicker - especially for multi-object tracking. Perfect for real-time systems.
6/12

06.03.2026 14:05 πŸ‘ 1 πŸ” 0 πŸ’¬ 1 πŸ“Œ 0

Now you can generate tons of video training data with 10-100+ objects in minutes in OCTRON and use it to train fast YOLO models. We already offer segmentation models (high-res masks + precise positions), but they're heavy. What if you just want fast single- or multi-object position tracking?
5/12

06.03.2026 14:04 πŸ‘ 0 πŸ” 0 πŸ’¬ 1 πŸ“Œ 0
Video thumbnail

The SAM3 model is powerful and works well for complex scenes. If you give your labels an expressive name, like β€œdark cell”, you are implicitly helping the network to find these structures. Here are some chromatophores in squid skin (color cells) for your viewing pleasure.
4/12

06.03.2026 14:04 πŸ‘ 1 πŸ” 0 πŸ’¬ 1 πŸ“Œ 0
Video thumbnail

But what if you wanted to annotate a school of fish? At a minimum you would have to click on all the fish individually, and in the worst case create a new annotation layer for every fish. Enter SAM3 β€œmulti”, which finds similar objects based on single prompts. It’s magic!
3/12

06.03.2026 14:02 πŸ‘ 1 πŸ” 0 πŸ’¬ 1 πŸ“Œ 0
Video thumbnail

SAM2 models in OCTRON make it possible to annotate subjects with fine anatomical detail. You can also group multiple objects under a single label class by placing them on separate layers, each distinguished with its own suffix.
2/12

06.03.2026 14:00 πŸ‘ 0 πŸ” 0 πŸ’¬ 1 πŸ“Œ 0
Post image

BIG UPDATES!
OCTRON now supports the new SAM3 models (alongside SAM2)!
We’ve also added a global switch for Detection vs. Segmentation: Use the same annotations to train either lightweight detection or full segmentation models. Plus, we’ve added support for the new YOLO26 models. ⬇️
1/12

06.03.2026 13:59 πŸ‘ 26 πŸ” 12 πŸ’¬ 2 πŸ“Œ 2
Drawing of the "Aeon" project, an open-source platform to study the neural basis of ethological behaviours over naturalistic timescales.

Are you a neuroscientist with great coding skills, or a software engineer interested in the brain?

We are recruiting for a research software engineer to help us build pipelines to process weeks of neural and behavioural recordings from freely moving animals.

More details: bit.ly/rse-2026a

05.03.2026 15:44 πŸ‘ 15 πŸ” 17 πŸ’¬ 1 πŸ“Œ 2

I’m building a foundational reading list for our lab (systems & circuit neuroscience, compneuro, modeling, neuromodulators, population coding etc.).

I’d like to crowdsource recommendations.

Which review(s) would you consider mandatory reading for the next generation of researchers?

01.03.2026 14:03 πŸ‘ 67 πŸ” 23 πŸ’¬ 7 πŸ“Œ 2
Preview
Cognitive Neurophysiology (CNP) - Institute of Basic Medical Sciences. We are now hiring two PhDs and a Post Doc. Deadline 1st of March: https://www.jobbnorge.no/en/available-jobs/job/294553/phd-research-fellow-in-machine-learning-for-cognitive-neuroscience Deadline 4th of ...

We have PhD positions and Post Doc positions available! www.med.uio.no/imb/english/... Please apply if you are interested in human invasive intracranial recordings or applied machine learning in a neuroscientific context.

Deadlines 1st of March (Sunday!) and 4th of March.

Please RT!

27.02.2026 14:40 πŸ‘ 18 πŸ” 10 πŸ’¬ 1 πŸ“Œ 2
Video thumbnail

Two-photon calcium imaging at 24,000 lines/s, with the resonant axis spanning 4x what other systems can do. Inertia-free. Diffraction-limited. No tradeoffs. Che-Hang Yu developed a 4x angle multiplier for laser scanning. His paper is out today: opg.optica.org/optica/fullt... 1/n #fluorescenceFriday

27.02.2026 20:16 πŸ‘ 99 πŸ” 23 πŸ’¬ 1 πŸ“Œ 1

It really is. Stay safe. 🀞🏼 πŸ€πŸ€žπŸΌπŸ€

25.02.2026 22:49 πŸ‘ 1 πŸ” 0 πŸ’¬ 1 πŸ“Œ 0

I second CAR-Ts! πŸ™

25.02.2026 18:14 πŸ‘ 1 πŸ” 0 πŸ’¬ 0 πŸ“Œ 0

Don't worry! πŸ’™

22.02.2026 18:22 πŸ‘ 1 πŸ” 0 πŸ’¬ 0 πŸ“Œ 0

I remember when your preprint popped up! www.biorxiv.org/content/10.1... (ours: www.biorxiv.org/content/10.1...) it was cool to see it being used by you too. I do think it's quite common in our fields that people converge on the same solution at around the same time, and it felt validating tbh. (:

21.02.2026 22:38 πŸ‘ 1 πŸ” 0 πŸ’¬ 2 πŸ“Œ 0
Preview
Functional network topography of the medial entorhinal cortex | PNAS The medial entorhinal cortex (MEC) creates a map of local space, based on the firing patterns of grid, head-direction (HD), border, and object-vect...

Not to throw shade on your publications and discoveries, but for Moran's I in neuro-topographical analysis I am not sure you can claim you introduced it to the field www.pnas.org/doi/full/10....

21.02.2026 18:27 πŸ‘ 1 πŸ” 0 πŸ’¬ 2 πŸ“Œ 0

Really nice work from the Kentros lab β€” repeating chemogenetic manipulation of grid cells produces the same place cell remapping each time. Direct evidence that grid subfield rate changes predictably drive place field reorganisation. Love it https://doi.org/10.64898/2026.02.10.705142

16.02.2026 11:31 πŸ‘ 22 πŸ” 3 πŸ’¬ 1 πŸ“Œ 1

I am glad you are voicing these concerns and I totally agree with you.

06.02.2026 12:39 πŸ‘ 2 πŸ” 0 πŸ’¬ 0 πŸ“Œ 0

It's not in the docs, but yes, you could do that if you can calibrate your FOVs. We don't offer 3D calibration procedures atm, but that's something we hint at in the discussion of our preprint. So if you are interested in that this would definitely be something we could solve! @claynerd.bsky.social

01.02.2026 14:19 πŸ‘ 2 πŸ” 0 πŸ’¬ 1 πŸ“Œ 0
Video thumbnail

You can use #OCTRON to track cells too!

It took 10 min to annotate multiple cells in 146 frames, 1.5 hrs to train a model with a GPU, and 10 sec to make these predictions. It doesn't get much easier than that⚑️

31.01.2026 21:11 πŸ‘ 7 πŸ” 3 πŸ’¬ 1 πŸ“Œ 0

We had a lot of fun doing these experiments! Theta sweeps in MEC and internal direction signals in parasubiculum track moving objects during pursuit and reverse during backward movement. Simultaneously recorded HD cells in other areas remain locked to head direction across behaviors:

28.01.2026 15:47 πŸ‘ 32 πŸ” 14 πŸ’¬ 1 πŸ“Œ 0
A synaptic locus of song learning

Learning by imitation is the foundation for verbal and musical expression, but its underlying neural basis remains obscure. A juvenile male zebra finch imitates the multisyllabic song of an adult tutor in a process that depends on a song-specialized cortico-basal ganglia circuit, affording a powerful system to identify the synaptic substrates of imitative motor learning. Plasticity at a particular set of cortico-basal ganglia synapses is hypothesized to drive rapid learning-related changes in song before these changes are subsequently consolidated in downstream circuits. Nevertheless, this hypothesis is untested and the synaptic locus where learning initially occurs is unknown. By combining a computational framework to quantify song learning with synapse-specific optogenetic and chemogenetic manipulations within and directly downstream of the cortico-basal ganglia circuit, we identified the specific cortico-basal ganglia synapses that drive the acquisition and expression of rapid vocal changes during juvenile song learning and characterized the hours-long timescale over which these changes consolidate. Furthermore, transiently augmenting postsynaptic activity in the basal ganglia briefly accelerates learning rates and persistently alters song, demonstrating a direct link between basal ganglia activity and rapid learning. These results localize the specific cortico-basal ganglia synapses that enable a juvenile songbird to learn to sing and reveal the circuit logic and behavioral timescales of this imitative learning paradigm.

### Competing Interest Statement
The authors have declared no competing interest.

National Institutes of Health, K99 NS144525 (DCS), F32 MH132152 (DCS), F31 HD098772 (SB), R01 NS099288 (RM), RF1 NS118424 (RM and JP)

Where does learning through imitation happen in the brain?

In juvenile zebra finches, we pinpoint a synaptic locus of song learning in a cortico-basal ganglia circuit and leverage this localization to measure the timescale of consolidation and make birds learn faster! #neuroskyence (1/14)

21.01.2026 16:39 πŸ‘ 71 πŸ” 26 πŸ’¬ 5 πŸ“Œ 7
Post image

This felt good.

21.01.2026 16:07 πŸ‘ 2 πŸ” 0 πŸ’¬ 0 πŸ“Œ 0
Flexible use of a multi-purpose tool by a cow / Curr. Biol., Jan. 19, 2026 (Vol. 36, Issue 2)
YouTube video by Cell Press

This must be one of the most wholesome videos I have ever seen accompanying a publication 🐄🐂 https://www.youtube.com/watch?v=SGVo41MuLiQ

19.01.2026 23:15 πŸ‘ 9 πŸ” 1 πŸ’¬ 0 πŸ“Œ 0