All the "you need to learn AI skills or you'll get left behind" things are patently nonsense. It's easy to use and only becomes easier to use over time. If there's skill, it's in knowing what it does well and what it does poorly.
This is a really good point. While there are men and women on the 'sceptic' side of all these debates, I don't know of any women on the AI 'booster' side. It's really a guy thing
Looking forward to a busy #ICCV2025.
I will give three (very different) talks at workshops and tutorials, see info below.
We also present two papers, ACE-G and SCR Priors.
And it's the 10th (!) anniversary of the R6D workshop, which we co-organize.
#TTT3R: 3D Reconstruction as Test-Time Training
TTT3R offers a simple state update rule to enhance length generalization for #CUT3R. No fine-tuning required!
Page: rover-xingyu.github.io/TTT3R
We rebuilt @taylorswift13's "22" live at the 2013 Billboard Music Awards - in 3D!
[Image: A futuristic corridor inside a data center with rows of tall, blue-lit server racks on both sides. Text overlaid at the bottom reads "JUPITER Supercomputer: Europe enters the exascale supercomputing league." European Commission logo in the lower right corner.]
Europe's first exascale supercomputer is here!
JUPITER, launched in Germany, is the EU's most powerful system and fourth fastest worldwide.
100% powered by renewables, it has also ranked first in energy efficiency. It will boost AI, science, and climate research.
Read more - europa.eu/!vcWBqW
There is a lot to hate about the politics of the silicon valley right, but they do actually want to build stuff, and I would prefer if the left didn't cede "we should be able to build stuff" to the right.
People often use "smart" when they mean "wise" and I don't think it's too controversial to doubt the wisdom of some tech elites. Other than that I certainly agree with you.
I can't fathom why the top picture, and not the bottom picture, is the standard diagram for an autoencoder.
The whole idea of an autoencoder is that you complete a round trip and seek cycle consistency, so why lay out the network linearly?
I love both.
Great video on the convergent evolution from hierarchical military command structures to cybernetics to centralized AI coordination across political ideologies:
www.youtube.com/watch?v=mayo...
I'd also welcome a Bayesian framing. I know Andrew Davison's group has done work on Gaussian belief propagation for SLAM factor graphs (gaussianbp.github.io) but other than that and arxiv.org/abs/1703.04977, I'm not aware of much Bayesian (deep) learning in (3D) vision right now.
In general I think 3D vision would do well to take some inspiration from Bayesians. I guess these days they lost their glamour, but imo it's a very nice way of thinking that feels somewhat lost currently.
"It is beautiful. It is elegant. Does it work well in practice? Not really. This is often the caveat we face in research: the things that are beautiful don't work and the things that work are not beautiful." – Daniel Cremers
You follow him. Andrew Davison from Imperial College London.
"As roboticists and computer vision people [outside of big tech], do we have to just wait for the next foundation model?"
I share the frustration. It's disempowering when most major progress recently is downstream of "foundation models" that you don't have the compute or data to train yourself.
We're live on bluesky! bibliome.club is the platform for creating, collaborating on and sharing reading lists with your Bluesky network - open source and decentralised via ATProto.
Sort of, but DINOv3 also seems to (inadvertently?) point towards the limits of pure scaling.
x.com/chrisoffner3...
If you maximize cosine similarity, aren't you left with only a single dimension (i.e. scaling the vector norm) as CosSim-invariant "wiggle room" to encode geometric information that isn't also captured by the language?
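To make the "wiggle room" point concrete, here's a minimal NumPy sketch (my own illustration, not from any of the models discussed): cosine similarity depends only on a vector's direction, so rescaling a feature by any positive factor leaves an alignment loss based on cosine similarity unchanged. The norm is the one degree of freedom such an objective cannot constrain.

```python
import numpy as np

def cos_sim(a, b):
    # Cosine similarity: depends only on direction, not magnitude.
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

rng = np.random.default_rng(0)
v = rng.normal(size=8)  # stand-in for a pixel feature
t = rng.normal(size=8)  # stand-in for a text embedding

# Rescaling v by any positive factor leaves the similarity unchanged,
# so a cosine-similarity alignment objective cannot "see" the norm.
for scale in (0.1, 1.0, 100.0):
    assert np.isclose(cos_sim(scale * v, t), cos_sim(v, t))

print("cosine similarity is invariant to positive rescaling")
```

So once directions are pinned to language embeddings, per-vector scale is the only remaining channel for extra (e.g. geometric) information, absent some auxiliary objective.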
Yes but that's an additional training objective beyond merely minimizing cosine similarity. You'd need to introduce something that ensures that pixel features don't just collapse to language semantics, via some auxiliary task, no?
It just seems to me that mapping pixels and language to highly similar internal representations means that you'll drop a lot of information that is not (or cannot) be accurately described by language.
If we try to perfectly reconstruct, e.g., a complex 3D mesh from a natural language description, we'll find that the two modalities operate on very different levels of precision and abstraction.
My concern is that language as a modality inherently biases the data towards coarser labels/concepts. You won't perfectly describe per-pixel normals and depth in natural language. Geometry is continuous and "raw", language is discrete and abstract.
Oh, interesting. I'll check that out!
Yay, DINOv3 is out!
SigLIP (VLMs) and DINO are two competing paradigms for image encoders.
My intuition is that joint vision-language modeling works great for semantic problems but may be too coarse for geometry problems like SfM or SLAM.
Most animals navigate 3D space perfectly without language.
What are the best resources to learn about VLMs? Papers, tutorials, courses, blog posts, whatever is good. I can read the Kimi-VL or GLM tech reports and follow the breadcrumbs but I'd appreciate any and all recommendations towards a useful VLM curriculum!
The tiny hand of the market.
Ensuring the robots can't take our jobs by teaching the robots functional programming
Yes to both.