This was a fantastic collaboration with
Zohar Rimon, Eli Shafer, Tal Tepper, and Efrat Shimron
Check it out, and come chat with us at #NeurIPS2025
Want to learn more? Our project page has data, code, a simulator, manufacturing recipes, and all the information you need to engage
zoharri.github.io/artificial-p...
What's next?
Our goal is a device that can visualize the internal structure of any soft object. This requires more data and more sophisticated models. Eventually, we'd like to see this work in clinical experiments, where we can use it to detect changes in a body's shape over time.
The result is a neural network that can process a sequence of tactile measurements and output an image of the soft object's internal structure. Because it is learned, it can recover precise shapes that appeared in the data, like the round inclusions seen in the video.
Finally, for tactile imaging, we learn to map the representation to a ground-truth (GT) image of the object's structure. But how do we obtain the GT internal structure?
We scanned our objects in an MRI!
So, we started from the beginning - data. We manufactured modular soft objects that we can automatically palpate for hours with a robot.
Our key idea - self-supervised learning.
By predicting, from a tactile sequence, the forces at a future position, we learn a tactile representation!
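A minimal sketch of that objective (not the authors' code - the linear model, shapes, and all names here are illustrative assumptions): a learned encoder turns a tactile sequence into a representation, and a head predicts the force at a future probe position from that representation.

```python
# Hedged sketch of the self-supervised objective: given a sequence of
# tactile readings, predict the force measured at a future probe position.
# Sizes, the synthetic data, and the linear model are all assumptions.
import numpy as np

rng = np.random.default_rng(0)
N, SEQ_LEN, FEAT, HID = 256, 8, 4, 16  # toy sizes (assumptions)

# Synthetic tactile sequences, future probe positions, and measured forces.
X_seq = rng.normal(size=(N, SEQ_LEN, FEAT))
pos = rng.normal(size=(N, 2))
w_true = rng.normal(size=SEQ_LEN * FEAT + 2)
y = np.concatenate([X_seq.reshape(N, -1), pos], axis=1) @ w_true \
    + 0.01 * rng.normal(size=N)

# Encoder maps the tactile sequence to a representation z; a head maps
# (z, future position) to a predicted force.
W_enc = 0.1 * rng.normal(size=(SEQ_LEN * FEAT, HID))
z = np.tanh(X_seq.reshape(N, -1) @ W_enc)      # tactile representation
feats = np.concatenate([z, pos], axis=1)

W_head0 = 0.1 * rng.normal(size=(HID + 2, 1))  # untrained head
loss_before = np.mean((feats @ W_head0[:, 0] - y) ** 2)

# Fit the head by least squares: training the force predictor is what
# forces the representation to carry information about the object.
W_head = np.linalg.lstsq(feats, y[:, None], rcond=None)[0]
loss_after = np.mean((feats @ W_head[:, 0] - y) ** 2)
assert loss_after < loss_before
```

The point of the sketch is only the shape of the objective: no labels are needed, since the prediction target (a future force reading) comes from the robot's own palpation data.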
There's been progress with tactile-based policies, but there's much more to tactile understanding - it's part of our world model!
There's also progress with rigid objects, which are easy to simulate/visualize, but soft objects are still a mystery...
Think of the applications - artificial palpation, cooking robots, smart prostheses, and many more domains are based on touch.
Our motivation in this work is breast tactile imaging.
But AI here is tricky - tactile data is hard to find!
Can we get AI to understand and visualize tactile scenes?
A short 🧵 about our new work
Humans have **tactile scene understanding** - transforming touch signals into a mental representation of the objects we manipulate. We learn this as children.
But what about AI?
youtu.be/D1VAWh3p_GU
I'm excited about scaling up robot learning! We've been scaling up data generation with RL in realistic sims generated from crowdsourced videos. This enables data collection far more cheaply than real-world teleop. Importantly, data becomes *cheaper* with more environments and transfers to real robots! 🧵 (1/N)
Reminder:
How to share arXiv papers
They performed well enough to make us believe that we can seriously spend $$$ to train a model to generate images that look real.
StyleGAN was the first time "deep fakes" became a real thing.
I could never remember the git commands beyond commit/push/pull, but I find that ChatGPT is very good at helping me with it whenever I need
Thanks. At a high level, I see search heuristics to be quite similar to value functions. In the book, we prove A* optimality by "reward shaping" Dijkstra's algorithm, which is another connection between search heuristics and values.
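That shaping connection can be sketched concretely (an illustrative toy, not the book's proof - the graph and heuristic below are assumptions): re-weight each edge (u, v) from c(u,v) to c(u,v) - h(u) + h(v). A consistent h keeps the shaped weights nonnegative, plain Dijkstra on the shaped graph behaves like A*, and every path's cost shifts by the same constant h(goal) - h(start), so the optimal path is preserved.

```python
# Hedged sketch: Dijkstra on reward-shaped edge weights behaves like A*.
# The toy graph and the consistent heuristic h are assumptions.
import heapq

graph = {  # node -> [(neighbor, cost), ...]
    'S': [('A', 1), ('B', 4)],
    'A': [('B', 2), ('G', 6)],
    'B': [('G', 2)],
    'G': [],
}
h = {'S': 4, 'A': 3, 'B': 2, 'G': 0}  # consistent: h(u) <= c(u,v) + h(v)

def dijkstra(graph, weight, start, goal):
    """Plain Dijkstra; `weight` lets us swap in shaped edge costs."""
    dist = {start: 0.0}
    pq = [(0.0, start)]
    while pq:
        d, u = heapq.heappop(pq)
        if u == goal:
            return d
        if d > dist.get(u, float('inf')):
            continue
        for v, c in graph[u]:
            nd = d + weight(u, v, c)
            if nd < dist.get(v, float('inf')):
                dist[v] = nd
                heapq.heappush(pq, (nd, v))
    return float('inf')

true_cost = dijkstra(graph, lambda u, v, c: c, 'S', 'G')
shaped = dijkstra(graph, lambda u, v, c: c - h[u] + h[v], 'S', 'G')
# Shaping shifts every S->G path cost by the constant h(G) - h(S),
# so the argmin path is unchanged - the essence of the A* connection.
assert shaped == true_cost + h['G'] - h['S']
```

This mirrors potential-based reward shaping in RL, where adding F(s, s') = γΦ(s') - Φ(s) to the reward leaves the optimal policy unchanged.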
There's a fairly large RL Theory community.
It's 2040. ICLR rebuttal now lasts two years. Reviewer 2 still hasn't read your paper but has strong opinions about it
Completely agree.
Recently I got a review (for a journal) that was from a relevant expert who also signed his name at the end (he didn't need to). Made us take the review very very seriously, even though it was negative.
ahm ahm... we just posted this today :)
bsky.app/profile/aviv...
Yes, when I teach I also have a final "hand waving" class on deep RL where I show how to go from the textbook material to DQN, PPO, Alpha Go. Adding such comments is a good idea, thanks!
We don't really cover deep RL algorithms. There's a lot on Q learning, and the distance from what we cover to DQN is very small. Actually, it would be a good idea to add a remark on that, thanks!
💯
We hope you find it useful!
The book is still a work in progress - we'd be grateful for comments, suggestions, omissions, and errors of any kind, at rlfoundationsbook@gmail.com
But for teaching RL, we wanted a book that is rigorous (full proofs and analytical examples), covers what we feel is most relevant, and is easy enough for undergrad teaching.
The book is a focused one-semester course for advanced undergrads/early grads that covers key topics in depth.
For teachers, we also have a 40+ page exam booklet on our website.
Why this book? ✨
There are several other excellent textbooks, including Sutton & Barto, and Bertsekas & Tsitsiklis.
Want to learn / teach RL? ✨
Check out new book draft:
Reinforcement Learning - Foundations ✨
sites.google.com/view/rlfound...
W/ Shie Mannor & Yishay Mansour
This is a rigorous first course in RL, based on our teaching at TAU CS and Technion ECE.
Some years ago, a director at a tech company told me:
""" I get so many applicants that if you don't have a NeurIPS paper (or equivalent), I won't even look at the application. """
It's quite a reasonable, individually rational response, but the systemic effects lead to what we see now.
The "bad reviewer party" sounds much more fun tho
yes, this is a great paper!
Reviews are mostly worthless; a good AC is critical.