Generative Multiview Relighting for 3D Reconstruction under Extreme Illumination Variation

arXiv:2412.15211
Authors
  • Hadi Alzayer
  • Philipp Henzler
  • Jonathan T. Barron
  • Jia-Bin Huang
  • Pratul P. Srinivasan
  • Dor Verbin

Affiliations
  • Google
  • University of Maryland, College Park
Reconstructing the geometry and appearance of objects from photographs taken in different environments is difficult as the illumination and therefore the object appearance vary across captured images. This is particularly challenging for more specular objects whose appearance strongly depends on the viewing direction. Some prior approaches model appearance variation across images using a per-image embedding vector, while others use physically-based rendering to recover the materials and per-image illumination. Such approaches fail at faithfully recovering view-dependent appearance given significant variation in input illumination and tend to produce mostly diffuse results. We present an approach that reconstructs objects from images taken under different illuminations by first relighting the images under a single reference illumination with a multiview relighting diffusion model and then reconstructing the object’s geometry and appearance with a radiance field architecture that is robust to the small remaining inconsistencies among the relit images. We validate our proposed approach on both synthetic and real datasets and demonstrate that it greatly outperforms existing techniques at reconstructing high-fidelity appearance from images taken under extreme illumination variation. Moreover, our approach is particularly effective at recovering view-dependent "shiny" appearance which cannot be reconstructed by prior methods.
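The two-stage pipeline described in the abstract (first relight all captures to a single reference illumination, then reconstruct from the relit set with a method robust to small residual inconsistencies) can be illustrated with a toy NumPy sketch. All function names are hypothetical, and the stand-ins are deliberately simplistic: the paper uses a multiview relighting diffusion model and a radiance-field reconstruction, whereas here relighting is a global color rescale and the robust reconstruction is a per-pixel median.

```python
import numpy as np

def relight_to_reference(images, reference):
    """Toy stand-in for the multiview relighting stage: match each
    image's global mean color to the reference illumination's mean.
    (The actual method uses a diffusion model; this only mimics the
    interface of mapping varied-illumination captures to one target.)"""
    ref_mean = reference.mean(axis=(0, 1), keepdims=True)
    relit = []
    for img in images:
        mean = img.mean(axis=(0, 1), keepdims=True)
        relit.append(img * (ref_mean / (mean + 1e-8)))
    return relit

def robust_fuse(relit_images):
    """Toy stand-in for the robust reconstruction stage: a per-pixel
    median suppresses small inconsistencies left after relighting.
    (The actual method trains a radiance field on the relit images.)"""
    return np.median(np.stack(relit_images), axis=0)

# Simulated captures of one object under three illumination intensities.
rng = np.random.default_rng(0)
base = rng.uniform(0.2, 0.8, size=(4, 4, 3))   # "true" object appearance
illums = [0.5, 1.0, 1.5]                       # per-capture illumination scales
captures = [base * s for s in illums]
reference = base * 1.0                         # target reference illumination

relit = relight_to_reference(captures, reference)
fused = robust_fuse(relit)                     # recovers the base appearance
```

In this contrived setting the relighting step exactly undoes each global illumination scale, so the fused result matches the base appearance; the point is only the control flow of relight-then-reconstruct, not the modeling.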