
Georg Bökman

@bokmangeorg

Geometric deep learning + Computer vision

989
Followers
733
Following
315
Posts
16.11.2024
Joined

Latest posts by Georg Bökman @bokmangeorg

Enrollment form strikes again. (OpenReview -> Tasks -> ECCV -> Author Enrollment Form)

26.02.2026 19:26 👍 4 🔁 0 💬 1 📌 0

I hate the ECCV template so much. Why can't it just die? Nobody prints proceedings into LNCS volumes anymore.

My favorite templates:

1. IEEE/CVPR
2. NeurIPS
...
...
39,2312,321. Etching tikz commands in Morse code onto the skin of a dead walrus
...
...
Springer/ECCV

22.02.2026 22:51 👍 47 🔁 5 💬 4 📌 0
CVPR 2025 Open Access Repository

Are they public? Can't find any openaccess.thecvf.com/content/CVPR...

08.02.2026 07:02 👍 1 🔁 0 💬 1 📌 0

I think making the reviews and discussions public, at least for accepted papers, would be a good first step.

06.02.2026 07:05 👍 1 🔁 0 💬 1 📌 0

Sounds like a good approach!

06.02.2026 07:04 👍 0 🔁 0 💬 0 📌 0

If the reviewers were interested in the papers, they would be interested in discussing them. This is the main problem in my opinion. :)

05.02.2026 08:38 👍 3 🔁 0 💬 1 📌 1

Pretty similar to how the discussion phase at ICLR played out. Maybe 1/4 of reviewers actually do their job.

05.02.2026 08:14 👍 3 🔁 0 💬 1 📌 0

I'm reviewing four papers. Three of them have a rebuttal. I wrote down my thoughts regarding the rebuttal for each. Across the three papers, one single (out of six) other reviewer has responded to the rebuttal.

05.02.2026 08:10 👍 6 🔁 1 💬 3 📌 0

What if position encodings were designed for vision from scratch? We introduce PaPE—Parabolic Position Encoding. Outperforms RoPE on 7/8 datasets and extrapolates to higher resolutions without fine-tuning or position interpolation. Paper, code, and website in thread 🧵

04.02.2026 08:22 👍 36 🔁 7 💬 3 📌 0
Preview
Recurrent Equivariant Constraint Modulation: Learning Per-Layer Symmetry Relaxation from Data Equivariant neural networks exploit underlying task symmetries to improve generalization, but strict equivariance constraints can induce more complex optimization dynamics that can hinder learning. Pr...

Our recent work on designing flexible, approximately equivariant NNs: arxiv.org/abs/2602.02853 This is w/ Stefanos Pertigkiozoglou (stefanospert.github.io), Mircea Petrache, and Kostas Daniilidis. Builds upon our work from the 2024 edition of NeurIPS: proceedings.neurips.cc/paper_files/...

04.02.2026 03:50 👍 5 🔁 2 💬 0 📌 1
Post image

The #ECCV2026 Malmö 🇸🇪 call for papers is now available. Check it out!

Call for Papers: eccv.ecva.net/Conferences/...

30.01.2026 11:36 👍 25 🔁 7 💬 0 📌 1

Looking forward to seeing the submission counts at ECCV and NeurIPS.

28.01.2026 07:36 👍 11 🔁 0 💬 1 📌 0

A chance to join us as a postdoc in Gothenburg to work on this :) www.chalmers.se/en/about-cha...

20.01.2026 10:46 👍 4 🔁 1 💬 0 📌 0
PrimePage Primes: 872989^131072 - 872989^65536 + 1 This page contains information about a single prime (discoverer, verification data, submission dates...).

The culprit: t5k.org/primes/page....

20.01.2026 12:14 👍 0 🔁 0 💬 0 📌 0
Post image

Funny failure mode

20.01.2026 12:13 👍 3 🔁 0 💬 1 📌 0

Why you should probe more than just the final layer of your Vision Transformer to maximize performance. 🧵👇

19.01.2026 09:44 👍 16 🔁 5 💬 1 📌 2
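The probing idea in the post above can be illustrated with a small sketch: fit a linear probe on features from each layer and compare accuracies, rather than trusting only the final layer. Real ViT activations are replaced here by synthetic features, and the layer names, shapes, and noise levels are purely illustrative assumptions, not the author's actual setup.

```python
import numpy as np

rng = np.random.default_rng(0)

def ridge_probe_accuracy(feats, labels, lam=1e-2):
    """Fit a closed-form ridge-regression probe (feats -> one-hot labels)
    and return its accuracy on the same data (a simple stand-in for a
    properly held-out linear probe)."""
    n, d = feats.shape
    onehot = np.eye(labels.max() + 1)[labels]
    w = np.linalg.solve(feats.T @ feats + lam * np.eye(d), feats.T @ onehot)
    preds = (feats @ w).argmax(axis=1)
    return (preds == labels).mean()

# Synthetic stand-ins for per-layer [CLS] features of a ViT (illustrative):
# "layer_09" keeps a linearly decodable class signal, "layer_12" has it
# mostly washed out -- mimicking the case where an intermediate layer
# probes better than the final one.
n, d, n_classes = 500, 64, 10
labels = rng.integers(0, n_classes, size=n)
signal = np.eye(n_classes)[labels] @ rng.normal(size=(n_classes, d))
features = {
    "layer_09": signal + 0.5 * rng.normal(size=(n, d)),
    "layer_12": 0.2 * signal + 1.0 * rng.normal(size=(n, d)),
}

for name, feats in features.items():
    print(name, round(ridge_probe_accuracy(feats, labels), 3))
```

With real models, the same loop would run over cached activations from each transformer block; the point is only that probe accuracy per layer is cheap to measure and often not maximized at the last layer.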

Something @eugenevinitsky.bsky.social and I are very curious about... how can we make our client (a version of Bluesky for researchers) more friendly to grad students? What would encourage you all to post more?

12.01.2026 22:35 👍 38 🔁 5 💬 16 📌 3

Guess who wrote those submissions 🤐

12.01.2026 15:16 👍 6 🔁 0 💬 1 📌 0

Another Erdos problem this morning:

(just to respond to a few people-- the system does NOT work by trying every possible answer and then checking. There's not enough matter and energy in the universe to solve theorems by trying every possible combination of symbols or whatever)

11.01.2026 11:54 👍 127 🔁 12 💬 3 📌 2

Fun! Some Meshuggah/Opeth/etc. riffs have a similar disorienting feeling, but I'm not sure if they are ever explicitly twelve-tone.

11.01.2026 07:40 👍 1 🔁 0 💬 0 📌 0
Video thumbnail

More rock riffs should be twelve-tone tbh

10.01.2026 14:37 👍 3 🔁 0 💬 1 📌 0

Power chords are even cooler in a 5-part choir harmonization of a twelve-tone row (from 2:17: open.spotify.com/track/7uezPJ... )

10.01.2026 08:35 👍 4 🔁 0 💬 1 📌 0

I liked the formulation "negative impact" here: "Per the conference policy, reviewers who fail to complete their reviews risk having their submissions rejected. To avoid this negative impact on your submission, encourage your co-author(s) to submit their reviews"

09.01.2026 11:43 👍 4 🔁 0 💬 1 📌 0

Reviewing for CVPR is sadly very boring.

09.01.2026 10:58 👍 14 🔁 0 💬 1 📌 2
Post image

New blog post (on a shiny new ICML blog!): What's New in #ICML2026 Peer Review

Some highlights:
- Policies to combat thinly sliced contributions
- Cascading desk rejections for peer-review abuse
- Reviewer reciprocity
- New ways to support authors and reviewers

Post: blog.icml.cc/2026/01/08/w...

08.01.2026 17:26 👍 22 🔁 8 💬 1 📌 0

so what do you think ChatGPT will say when ten million people ask it who they should vote for next year

31.12.2025 13:44 👍 392 🔁 115 💬 5 📌 2

I'd like to propose the following norm for peer review of papers. If a paper shows clear signs of LLM-generated errors that were not detected by the author, the paper should be immediately rejected. My reasoning: 1/ #ResearchIntegrity

28.12.2025 06:23 👍 115 🔁 28 💬 4 📌 6
Preview
No, it’s not The Incentives—it’s you There’s a narrative I find kind of troubling, but that unfortunately seems to be growing more common in science. The core idea is that the mere existence of perverse incentives is a valid and…

every claim that "the incentives" support or deter certain kinds of behavior is also a statement about what kinds of external signals the claimant views as rewards or penalties #linklog

25.12.2025 00:46 👍 31 🔁 7 💬 0 📌 1
Post image

On the unexplained similarity across networks

In behavior, order, and weights, we keep seeing evidence that learning is more consistent than one might think.

A walk through the occurrences, my thoughts, and the open question: why?!
Share your hypotheses, papers I missed, and thoughts.
🤖📈🧠 #AI

21.12.2025 10:45 👍 13 🔁 3 💬 2 📌 2
Post image

📢 The second edition of the ✨GRaM workshop✨ is here, this time at #ICLR26.

🌟 Submit your exciting work on geometry-grounded representations.

We welcome submissions in multiple tracks:
📄 Proceedings
📝 Extended abstract
👩‍🏫 Tutorial/blog post
as well as an exciting challenge!

18.12.2025 05:31 👍 12 🔁 6 💬 2 📌 2