🐕RefRef: A Synthetic Dataset and Benchmark for
Reconstructing Refractive and Reflective Objects

Abstract

Modern 3D reconstruction and novel view synthesis approaches have demonstrated strong performance on scenes with opaque, non-refractive objects. However, most assume straight light paths and therefore cannot properly handle refractive and reflective materials. Moreover, datasets specialized for these effects are limited, stymieing efforts to evaluate performance and to develop suitable techniques.

In this work, we introduce RefRef, a synthetic dataset and benchmark for reconstructing scenes with refractive and reflective objects from posed images. The dataset contains 50 such objects of varying complexity, from single-material convex shapes to multi-material non-convex shapes, each placed in three different background types, yielding 150 scenes. We also propose an oracle method that, given the object geometry and refractive indices, computes accurate light paths for neural rendering, as well as a method built on the oracle that does not require this information. We benchmark these against several state-of-the-art methods and show that all of them lag significantly behind the oracle, highlighting the challenges posed by the task and dataset.
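To illustrate the geometric operation at the heart of such an oracle, the sketch below applies Snell's law to bend a ray at a single interface; with known geometry and refractive indices, this step can be applied at every surface intersection along a light path. It is a minimal NumPy sketch under those assumptions, not the paper's implementation, and the function name and sign conventions are illustrative.

import numpy as np

def refract_or_reflect(d, n, eta):
    """Bend a unit ray direction d at a surface with unit normal n.

    eta is the ratio of refractive indices n1 / n2 (incident / transmitted
    medium); n is assumed to point against the incident ray. Returns the
    refracted direction, or the mirror reflection under total internal
    reflection. Illustrative sketch only.
    """
    cos_i = -np.dot(n, d)                       # cosine of the incidence angle
    sin2_t = eta ** 2 * (1.0 - cos_i ** 2)      # squared sine of the transmission angle (Snell's law)
    if sin2_t > 1.0:                            # total internal reflection
        return d + 2.0 * cos_i * n              # mirror reflection about the normal
    cos_t = np.sqrt(1.0 - sin2_t)
    return eta * d + (eta * cos_i - cos_t) * n  # refracted direction in vector form

For example, a ray entering glass (refractive index around 1.5) from air would use eta ≈ 1/1.5 at the entry interface and eta ≈ 1.5 when exiting back into air.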

Method Teaser

Visual Comparisons

Side-by-side image comparisons of Oracle (Ours) and R3F (Ours) against TNSR, MS-NeRF, Zip-NeRF, and Ray Deformation.

Quantitative Results

BibTeX

@misc{yin2025refrefsyntheticdatasetbenchmark,
      title={RefRef: A Synthetic Dataset and Benchmark for Reconstructing Refractive and Reflective Objects}, 
      author={Yue Yin and Enze Tao and Weijian Deng and Dylan Campbell},
      year={2025},
      eprint={2505.05848},
      archivePrefix={arXiv},
      primaryClass={cs.CV},
      url={https://arxiv.org/abs/2505.05848}, 
}