Mert Özer

@mert-o

73 Followers · 13 Following · 7 Posts · Joined 12.11.2024

Latest posts by Mert Özer @mert-o

Had a great experience presenting our work on 3D scene reconstruction from a single image with @visionbernie.bsky.social at #3DV2025 🇸🇬

andreeadogaru.github.io/Gen3DSR

Reach out if you're interested in discussing our research or exploring international postdoc opportunities @fau.de

26.03.2025 02:27 👍 18 🔁 4 💬 0 📌 1
Happy to share our latest 3D generative breast model: the *implicit* RBSM, or iRBSM for short. As opposed to its PCA-based predecessor, the iRBSM leverages implicit neural representations, yielding a highly detailed and expressive 3D breast model.

Paper: arxiv.org/abs/2412.13244

19.12.2024 11:23 👍 10 🔁 3 💬 1 📌 0
I am very excited to announce our paper “DynaMoN: Motion-Aware Fast and Robust Camera Localization for Dynamic Neural Radiance Fields” has been accepted to #IEEE RA-L.

Paper: ieeexplore.ieee.org/document/107...
Project page: hannahhaensen.github.io/DynaMoN/
Code: github.com/HannahHaense...

09.12.2024 11:39 👍 10 🔁 3 💬 1 📌 1
The chart shows the so-called "Autarkie" (self-sufficiency), i.e. the share of self-consumed electricity from the PV system in total electricity demand.

The potential of photovoltaics is quite high. That is especially true for the summer half of the year. But even in the transitional seasons, PV contributes a great deal to the energy supply.
The current chart, using our house as an example.
In November, still 50%.

30.11.2024 21:46 👍 101 🔁 12 💬 6 📌 0
Micronaut: The fine art of microscopy by science photographer Martin Oeggerli

Credits to Takuma Nishimura and Martin Oeggerli (Micronaut) | Find more on: www.micronaut.ch

29.11.2024 14:46 👍 5 🔁 1 💬 0 📌 0
Scanning electron microscopes reveal surfaces invisible to the naked eye. However, they can only capture grayscale images. Manual coloring is a cumbersome process, which is why FAU researchers use the 3D structure to propagate one colorized view to a whole scene. Impressive! 🎨

Artwork by Micronaut.

29.11.2024 14:44 👍 21 🔁 4 💬 1 📌 0
A high-rise building in the background. In front, a lift platform with two people working on a street lamp.

We need to convert street lighting to LED. The annual electricity consumption costs more than the bulb itself. Such an investment has a price, but otherwise future municipal budgets are burdened unnecessarily: non-LED lamps are uneconomical.

28.11.2024 12:13 👍 95 🔁 9 💬 6 📌 1

My growing list of #computervision researchers on Bsky.

Missed you? Let me know.

go.bsky.app/M7HGC3Y

19.11.2024 23:00 👍 131 🔁 42 💬 88 📌 9
Solar modules on a flat roof

Solar. More solar!

26.11.2024 23:37 👍 89 🔁 6 💬 4 📌 0
We handle occlusions by employing amodal completion for each instance. The completed instance is then reconstructed using existing models that perform well for single objects. However, we first address the object crop domain shift (e.g., focal length) through reprojection. (4/5)

19.11.2024 21:52 👍 3 🔁 1 💬 1 📌 0
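
The crop-reprojection idea can be sketched as a change of pinhole intrinsics: a tight object crop behaves like a camera with a shifted principal point and, after resizing, a different focal length. A minimal NumPy sketch, where `remap_pixels`, `K_crop`, and `K_canon` are hypothetical names for exposition, not code from Gen3DSR:

```python
import numpy as np

def remap_pixels(uv, K_src, K_dst):
    """Map pixel coordinates from one pinhole camera to another
    sharing the same optical center (no rotation/translation):
        x_dst = K_dst @ K_src^-1 @ x_src   (homogeneous coordinates)
    uv: (N, 2) pixel coordinates in the source camera.
    """
    uv1 = np.hstack([uv, np.ones((len(uv), 1))])        # homogenize
    mapped = uv1 @ (K_dst @ np.linalg.inv(K_src)).T     # apply 3x3 warp
    return mapped[:, :2] / mapped[:, 2:3]               # dehomogenize

# Illustrative intrinsics: the crop camera vs. a canonical camera that
# single-object reconstruction models were trained for.
K_crop  = np.array([[200.0, 0.0, 50.0], [0.0, 200.0, 50.0], [0.0, 0.0, 1.0]])
K_canon = np.array([[100.0, 0.0, 64.0], [0.0, 100.0, 64.0], [0.0, 0.0, 1.0]])
print(remap_pixels(np.array([[50.0, 50.0]]), K_crop, K_canon))  # [[64. 64.]]
```

The principal point of the crop lands on the principal point of the canonical camera, which is the intuition behind removing the focal-length domain shift before reconstruction.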
First, we parse the image of the scene by identifying the composing entities and estimating the depth and camera parameters. Each instance is then processed individually. The unprojected depth serves as a layout reference for composing the scene in 3D space. (3/5)

19.11.2024 21:52 👍 3 🔁 1 💬 1 📌 0
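
The depth-unprojection step that produces the 3D layout reference can be sketched as standard pinhole back-projection (a minimal NumPy sketch; function and variable names are illustrative, not taken from the paper):

```python
import numpy as np

def unproject_depth(depth: np.ndarray, K: np.ndarray) -> np.ndarray:
    """Back-project a depth map to 3D points in camera coordinates.

    depth: (H, W) metric depth per pixel
    K:     (3, 3) camera intrinsics
    returns (H, W, 3) points via X = d * K^-1 @ [u, v, 1]^T
    """
    H, W = depth.shape
    u, v = np.meshgrid(np.arange(W), np.arange(H))      # pixel grid
    pix = np.stack([u, v, np.ones_like(u)], axis=-1)    # homogeneous pixels
    rays = pix @ np.linalg.inv(K).T                     # per-pixel ray directions
    return rays * depth[..., None]                      # scale rays by depth

# Toy example: constant 5 m depth and a simple camera
K = np.array([[100.0, 0.0, 1.0], [0.0, 100.0, 1.0], [0.0, 0.0, 1.0]])
pts = unproject_depth(np.full((2, 2), 5.0), K)
print(pts.shape)  # (2, 2, 3)
```

Each reconstructed instance can then be placed relative to these unprojected points when composing the scene.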

Most single-image scene-level reconstruction methods require 3D supervised end-to-end training and suffer from poor generalization capabilities. We propose a modular approach where each component performs well by focusing on specific tasks that are easier to supervise. (2/5)

19.11.2024 21:52 👍 2 🔁 1 💬 1 📌 0
Excited to share our paper which will be presented at #3DV2025

✨ Gen3DSR: Generalizable 3D Scene Reconstruction via Divide and Conquer from a Single View ✨
🌐 Project page: andreeadogaru.github.io/Gen3DSR
📄 Paper: arxiv.org/abs/2404.03421
👩‍💻 Code: github.com/AndreeaDogar...
(1/5)

19.11.2024 21:52 👍 23 🔁 6 💬 1 📌 0

(3/3) For colorization, we use images manually colorized by artist Martin Oeggerli: we project their colors into 3D space with the estimated depths to create supervision, and additionally use the feature loss employed by Ref-NPR to infer colors for areas invisible in the input views.

13.11.2024 22:35 👍 2 🔁 1 💬 0 📌 0

(2/3) Our work utilizes Scanning Electron Microscopy (SEM) images of pollen. Two stages: grayscale novel view synthesis and colorization. The grayscale scene is represented by 2DGS, where poses are estimated using perspective projection with exceptionally long focal lengths.

13.11.2024 22:34 👍 2 🔁 1 💬 1 📌 0
Thrilled to share our work: ArCSEM: Artistic Colorization of SEM Images via Gaussian Splatting
Novel view synthesis of scanning electron microscopy images and conditional colorization.
📝 arXiv: arxiv.org/abs/2410.21310
🎨Project page: ronly2460.github.io/ArCSEM
(1/3)

13.11.2024 22:34 👍 14 🔁 7 💬 1 📌 0
🚨 Paper Alert 🚨 #CVIU
NeRFtrinsic Four: An End-To-End Trainable NeRF Jointly Optimizing Diverse Intrinsic and Extrinsic Camera Parameters has been accepted to #CVIU!!!
Many thanks to my co-authors! Shout out to Fabian Deuser, @visionbernie.bsky.social, Norbert Oswald, and Daniel Roth.
(1/4)

13.11.2024 07:35 👍 7 🔁 3 💬 3 📌 0

Kudos to @mert-o.bsky.social, Maximilian Weiherer, @mhundhausen.bsky.social, @visionbernie.bsky.social!
7/7

12.11.2024 20:04 👍 1 🔁 0 💬 0 📌 0

The idea for this research originated on X, where we saw that one of our colleagues had a thermal camera, and we started capturing images for an initial dataset.
x.com/M_Hundhausen...
6/7

12.11.2024 20:02 👍 2 🔁 1 💬 2 📌 0
Since there is a lack of publicly available datasets containing multi-view, near-perfectly aligned RGB and thermal images, we share our collected dataset, called ThermalMix, here: zenodo.org/records/1106...
ThermalMix includes six common objects and a total of about 360 images.
5/7

12.11.2024 20:02 👍 1 🔁 0 💬 1 📌 0
A core challenge in building multi-sensory NeRFs is cross-modality calibration. We apply offline camera calibration prior to data capturing, leading to near-perfect alignments between images from different sensors. For thermal images, we chose a perforated aluminum plate.
4/7

12.11.2024 20:02 👍 2 🔁 1 💬 1 📌 0

(3) Adding a second branch to the color network to predict RGB-X values. (4) Adding an extra *network* to predict thermal/IR/depth values. For the latter, we restrict back-prop through the density network, preventing geometry from being influenced by the second modality.
3/7

12.11.2024 20:02 👍 2 🔁 0 💬 1 📌 0
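
The restricted back-propagation in strategy (4) can be illustrated with a toy 1-D model whose gradients are written out by hand (purely illustrative names and shapes; in a real PyTorch NeRF this corresponds to feeding the second head a detached copy of the density features):

```python
def grads(w_g, w_c, w_t, x, y_rgb, y_t, stop_grad_density=True):
    """Toy version of strategy (4): a shared 'density' weight w_g feeds
    an RGB head (w_c) and a thermal head (w_t). With stop_grad_density
    the thermal loss does not propagate into w_g, so the geometry is
    shaped by RGB supervision alone."""
    f = w_g * x                       # shared geometry/feature branch
    rgb, t = w_c * f, w_t * f         # per-modality heads
    d_rgb, d_t = 2 * (rgb - y_rgb), 2 * (t - y_t)   # dL/d(output)
    return {
        "w_c": d_rgb * f,
        "w_t": d_t * f,
        # RGB always updates the density branch; thermal only if allowed
        "w_g": d_rgb * w_c * x + (0.0 if stop_grad_density else d_t * w_t * x),
    }

blocked = grads(1.0, 1.0, 1.0, x=2.0, y_rgb=1.0, y_t=0.0)
full    = grads(1.0, 1.0, 1.0, x=2.0, y_rgb=1.0, y_t=0.0, stop_grad_density=False)
print(blocked["w_g"], full["w_g"])  # 4.0 12.0
```

The thermal head still receives a full gradient in both cases; only the path back into the density parameters is cut.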
We systematically compare four different strategies to learn multi-modal NeRFs from RGB + thermal, RGB + IR, and RGB + depth data: (1) Train from scratch on both modalities, leveraging camera poses computed from RGB images. (2) Pre-train on RGB, fine-tune on second modality.
2/7

12.11.2024 20:02 👍 2 🔁 0 💬 1 📌 0
How can we learn a multi-modal neural radiance field? What’s the best way to integrate images from a second modality, other than RGB, into NeRF? Check out our new paper!
Project page: mert-o.github.io/ThermalNeRF/
Paper: arxiv.org/abs/2403.11865
Dataset: zenodo.org/records/1106...
1/7

12.11.2024 20:02 👍 15 🔁 6 💬 1 📌 1