SceneComp @ ICCV 2025
Generative Scene Completion for Immersive Worlds
Morning Session on Monday, October 20 — Room 301 B
This workshop focuses on generative scene completion, which is indispensable for world models, VR/AR, telepresence, autonomous driving, and robotics. It explores how generative models can reconstruct photorealistic 3D environments from sparse or partial captures by filling in occluded or unseen regions. Topics include world models, generative models, inpainting, artifact removal, uncertainty, controllability, and the handling of casually captured data. We will also discuss how related directions such as text-to-3D and single-image-to-3D compare with scene completion, which must satisfy more input constraints. The workshop highlights key challenges and recent progress in turning incomplete real-world captures into immersive environments.
Open our interactive explainer →
Schedule
| Time (Hawaii 🏖️) | Speaker |
|---|---|
| 08:50 – 09:00 | Opening remarks |
| 09:00 – 09:35 | Peter Kontschieder |
| 09:35 – 10:10 | Angela Dai |
| 10:10 – 10:45 | Aleksander Hołyński |
| 10:45 – 10:50 | Coffee Break |
| 10:50 – 11:25 | Varun Jampani |
| 11:25 – 12:00 | Andrea Tagliasacchi |
| 12:00 – 12:35 | Zan Gojcic |
| 12:35 – 12:40 | Closing remarks |
Speakers
Peter Kontschieder
Meta Reality Labs
Angela Dai
TU Munich
Aleksander Hołyński
Google DeepMind
Varun Jampani
Arcade AI
Zan Gojcic
NVIDIA Zurich
Andrea Tagliasacchi
SFU / Google DeepMind
Organizers
Ethan Weber
Meta Reality Labs
Hong-Xing “Koven” Yu
Stanford
Lily Goli
University of Toronto
Alex Trevithick
NVIDIA
Angjoo Kanazawa
UC Berkeley
Jiajun Wu
Stanford
Norman Müller
Meta Reality Labs
Christian Richardt
Meta Reality Labs