Ziyu Chen

I work closely with Prof. Yue Wang and spent a wonderful time as a visiting student in his Geometry, Vision, and Learning Lab (GVL) at the University of Southern California. I am also fortunate to collaborate closely with Prof. Marco Pavone at Stanford and Dr. Ge Yang at MIT. Previously, I completed my Bachelor's and Master's degrees at Shanghai Jiao Tong University.

My research interests lie at the intersection of computer vision and robotics.

Email  /  GitHub  /  Google Scholar  /  Twitter

  News
  • Feb 2025: OmniRe is accepted to ICLR 2025 as a Spotlight 🌟 Shout out to Prof. Yue Wang and all my collaborators!
  • Oct 2024: Gave a talk at the MIT Visual Computing Seminar and visited beautiful Boston! 🚣
  • Aug 2024: Released DriveStudio, a 3DGS system for driving reconstruction/simulation! 🚗
  • Sep 2023: Joined the Geometry, Vision, and Learning Lab as a research intern, advised by Prof. Yue Wang!
  • Sep 2023: Awarded the Chinese National Scholarship. Thanks to my advisor, Prof. Li Song!
  Publications

 OmniRe: Omni Urban Scene Reconstruction 
Ziyu Chen, Jiawei Yang, Jiahui Huang, Riccardo de Lutio, Janick Martinez Esturo, Boris Ivanovic, Or Litany, Zan Gojcic, Sanja Fidler, Marco Pavone, Li Song, Yue Wang
ICLR 2025 (Spotlight)

Website | Abstract | Paper | Review | Code

360-Degree Panorama Generation from Few Unregistered NFoV Images
Jionghao Wang*, Ziyu Chen*, Jun Ling, Rong Xie, Li Song
(* equal contribution)
ACM Multimedia 2023

Paper | Abstract | Code


L-Tracing: Fast Light Visibility Estimation on Neural Surfaces by Sphere Tracing
Ziyu Chen, Chenjing Ding, Jianfei Guo, Dongliang Wang, Yikang Li, Xuan Xiao, Wei Wu, Li Song
ECCV 2022

Paper | Abstract | Code

  Open-source

DriveStudio

A 3DGS codebase for dynamic urban scene reconstruction/simulation, supporting multiple popular driving datasets, including Waymo, PandaSet, Argoverse 2, KITTI, nuScenes, and nuPlan. It also provides different types of Gaussian representations for reconstructing rigid instances (vehicles) and non-rigid instances (pedestrians, cyclists, etc.).

Neverwhere: Hyper-Realistic Visual Locomotion Benchmark
Leading Developer & Core Contributor
Collaboration with researchers from MIT, USC and UCSD

Document | Code

A toolchain and benchmark suite for hyper-realistic visual locomotion that builds high-fidelity digital twins of real-world environments for closed-loop evaluation.

  Recent Talks
  • OmniRe: Omni Urban Scene Reconstruction
    • MIT Visual Computing Seminar, Oct 2024
    • Peking University, Oct 2024
    • LiAuto, Sep 2024
  Honors & Awards
  • National Scholarship (highest honor, awarded to 2 of 200+ students), 2023
  • Zhiyuan Honor Scholarship (Elite Undergraduate Program, top 5%), SJTU, 2018-2022

Website design based on this template