Welcome to the Realistic Graphics and Imaging group in the Department of Computing at Imperial College London. We conduct research in realistic computer graphics spanning acquisition, modeling and rendering of real-world materials, objects and scenes, as well as imaging for graphics and vision, including computational photography and illumination. We are affiliated to the Visual Computing research theme within DOC.


Projects

  • Acquiring axially-symmetric transparent objects using single-view transmission imaging

    We propose a novel, practical solution for high-quality reconstruction of axially-symmetric transparent objects such as glasses, tumblers, goblets, carafes, etc., using single-view transmission imaging of a few patterns emitted from a background LCD panel. Our approach employs inverse ray tracing to reconstruct both completely symmetric and more complex n-fold symmetric everyday transparent objects (a sketch of the underlying refraction step follows below).

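    The inverse ray tracing above repeatedly refracts camera rays through the estimated transparent surface. A minimal sketch of that Snell refraction step, assuming a standard refract formula and a made-up refractive index (these values are not taken from the project):

        import numpy as np

        def refract(incident, normal, eta):
            # Refract a unit incident direction at a surface with unit normal.
            # eta = n1 / n2; returns None on total internal reflection.
            cos_i = -np.dot(incident, normal)
            k = 1.0 - eta**2 * (1.0 - cos_i**2)
            if k < 0.0:
                return None
            return eta * incident + (eta * cos_i - np.sqrt(k)) * normal

        # Example: a camera ray entering glass (n ~ 1.5, an assumed value) from air.
        ray = np.array([0.0, -0.6, -0.8])
        surface_normal = np.array([0.0, 0.0, 1.0])
        print(refract(ray, surface_normal, 1.0 / 1.5))
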
  • Acquiring Spatially Varying Appearance of Printed Holographic Surfaces

    We present two novel and complementary approaches to measure diffraction effects in commonly found planar, spatially varying holographic surfaces. Such holographic surfaces are usually manufactured with one-dimensional diffraction gratings whose periodicity and orientation vary across the sample in order to produce a wide range of diffraction effects such as gradients and kinematic (rotational) effects. Our proposed methods estimate these two parameters and allow an accurate reproduction of these effects in real time (a grating-equation sketch follows below).

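    The gradients and kinematic effects above arise from how a 1D grating's period and orientation select diffracted wavelengths, governed by the standard grating equation sin(theta_m) = sin(theta_i) + m * lambda / d. A small illustrative sketch (the period and angles are made-up values, not measurements from this work):

        import numpy as np

        def diffraction_angles(period_nm, wavelength_nm, incident_deg, orders=(1, 2)):
            # Solve sin(theta_m) = sin(theta_i) + m * lambda / d for a 1D grating.
            # Returns the exit angle in degrees per order, or None if evanescent.
            s_i = np.sin(np.radians(incident_deg))
            out = {}
            for m in orders:
                s_m = s_i + m * wavelength_nm / period_nm
                out[m] = float(np.degrees(np.arcsin(s_m))) if abs(s_m) <= 1.0 else None
            return out

        # Example: a 1000 nm period grating under 30-degree incidence,
        # probed at blue, green and red wavelengths.
        for lam in (450.0, 550.0, 650.0):
            print(lam, diffraction_angles(1000.0, lam, 30.0))
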
  • Deep Polarization 3D Imaging

    We present a novel method for efficient acquisition of shape and spatially varying reflectance of 3D objects using polarization cues. We couple polarization imaging with deep learning to achieve high-quality estimates of 3D object shape (surface normals and depth) and SVBRDF using single-view polarization imaging under frontal flash illumination (a sketch of standard polarization cues follows below).

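    Polarization cues of this kind are commonly derived from captures at several polarizer orientations. A minimal sketch of the standard degree and angle of linear polarization computed from 0/45/90/135-degree filter images (generic polarization imaging, not this project's full pipeline):

        import numpy as np

        def polarization_cues(i0, i45, i90, i135):
            # Per-pixel linear Stokes parameters from four polarizer-angle images.
            s0 = 0.5 * (i0 + i45 + i90 + i135)   # total intensity
            s1 = i0 - i90                        # horizontal vs. vertical
            s2 = i45 - i135                      # diagonal components
            dolp = np.sqrt(s1**2 + s2**2) / np.maximum(s0, 1e-6)
            aolp = 0.5 * np.arctan2(s2, s1)      # radians
            return dolp, aolp

        # Example on random stand-in images.
        i0, i45, i90, i135 = (np.random.rand(4, 4) for _ in range(4))
        dolp, aolp = polarization_cues(i0, i45, i90, i135)
        print(dolp.shape, aolp.shape)
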
  • Deep Shape and SVBRDF Estimation using Smartphone Multi-lens Imaging

    We present a deep neural network-based method that acquires high-quality shape and spatially varying reflectance of 3D objects using smartphone multi-lens imaging. Our method acquires two images simultaneously using a zoom lens and a wide-angle lens of a smartphone under either natural illumination or phone flash conditions, effectively functioning like a single-shot method.

  • Desktop-based High Quality Facial Capture

    We present a novel desktop-based system for high-quality facial capture, including geometry and appearance. The proposed acquisition system is highly practical and scalable, consisting purely of commodity components. The setup consists of a set of displays that provide controlled illumination for reflectance capture, in conjunction with multiview acquisition of facial geometry.

  • Diffuse-Specular Separation using Binary Spherical Gradient Illumination

    We introduce a novel method for view-independent diffuse-specular separation of albedo and photometric normals, without requiring polarization, using binary spherical gradient illumination. The method does not impose restrictions on viewpoints and requires fewer photographs for multiview acquisition than polarized spherical gradient illumination.

  • Efficient surface diffraction renderings with Chebyshev approximations

    We propose an efficient method for reproducing diffraction colours on natural surfaces with complex nanostructures that can be represented as height-fields. Our method employs Chebyshev approximations to accurately model the view-dependent iridescence of such a surface in its spectral bidirectional reflectance distribution function (BRDF). As its main contribution, our method significantly reduces the runtime memory footprint of precomputed lookup tables without compromising photorealism (a toy Chebyshev compression sketch follows below).

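    The memory saving comes from replacing dense tabulation with a handful of Chebyshev coefficients. A toy illustration using NumPy's Chebyshev fitting on a 1D stand-in function (the function and degree are assumptions for illustration, not the paper's BRDF data):

        import numpy as np
        from numpy.polynomial import chebyshev as cheb

        # Stand-in for a densely tabulated 1D slice of a spectral response.
        x = np.linspace(-1.0, 1.0, 1024)
        table = np.exp(-4.0 * x**2) * np.cos(12.0 * x)

        # A degree-24 Chebyshev series: 25 coefficients replace 1024 samples.
        coeffs = cheb.chebfit(x, table, deg=24)
        approx = cheb.chebval(x, coeffs)

        print("coefficients:", coeffs.size, "table entries:", table.size)
        print("max abs error:", float(np.max(np.abs(approx - table))))
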
  • High Quality Neural Relighting using Practical Zonal Illumination

    We present a method for high-quality image-based relighting using a practical, limited zonal illumination field. We employ a set of desktop monitors to illuminate a subject from a near-hemispherical zone and record One-Light-At-A-Time (OLAT) images from multiple viewpoints. We further extrapolate the sampling of incident illumination directions beyond the frontal coverage of the monitors by repeating OLAT captures with the subject rotated relative to the capture setup. Finally, we train our proposed skip-assisted autoencoder and latent-diffusion-based generative method to learn a high-quality continuous representation of the reflectance function without requiring explicit alignment of the data captured from the various viewpoints. This method enables smooth lighting animation for high-frequency reflectance functions and effectively extends incident lighting beyond the illumination zone of the practical capture setup (the relighting step itself is a weighted combination of OLAT images; see the sketch below).

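    Once the reflectance function is learned, relighting under any incident lighting reduces to a weighted combination of (real or predicted) OLAT images. A minimal sketch of that relighting step (array shapes and the random stand-in data are assumptions for illustration):

        import numpy as np

        def relight(olat_stack, light_weights):
            # olat_stack:    (num_lights, H, W, 3) images, one per light direction
            # light_weights: (num_lights,) intensities sampled from the target lighting
            return np.tensordot(light_weights, olat_stack, axes=1)

        # Example with random stand-in OLAT data for 64 lights.
        olat = np.random.rand(64, 32, 32, 3)
        weights = np.random.rand(64) / 64.0
        print(relight(olat, weights).shape)   # -> (32, 32, 3)
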
  • Image-Based Relighting using Room Lighting Basis

    We present a novel and practical approach for image-based relighting that employs the lights available in a regular room to acquire the reflectance field of an object. We achieve plausible results for diffuse and glossy objects that are qualitatively similar to results produced with dense sampling of the reflectance field, including with a light stage. We believe our approach can be applied to practical relighting applications with general studio lighting (a sketch of the basis-projection idea follows below).

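    One way to read this is as projecting a target lighting condition onto the coarse basis spanned by the room's lights and combining the corresponding basis images. A hedged sketch of that idea (the basis matrix, shapes and non-negative clamp are illustrative assumptions, not the project's exact formulation):

        import numpy as np

        # Assumed setup: each room light's angular distribution is tabulated over
        # K environment directions, giving a (K, num_lights) basis matrix.
        K, num_lights = 256, 8
        basis = np.abs(np.random.rand(K, num_lights))   # stand-in room-light basis
        target = np.abs(np.random.rand(K))              # stand-in target lighting

        # Least-squares weights for the room lights, clamped to be non-negative.
        w, *_ = np.linalg.lstsq(basis, target, rcond=None)
        w = np.clip(w, 0.0, None)

        # Relit result: weighted combination of images captured under each room light.
        basis_images = np.random.rand(num_lights, 32, 32, 3)
        relit = np.tensordot(w, basis_images, axes=1)
        print(w.round(3), relit.shape)
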
  • Mobile Surface Reflectometry

    We propose two novel setups for acquiring spatially varying surface reflectance properties of planar samples using mobile devices. Our first setup employs free-form handheld acquisition with the back camera-flash pair on a typical mobile device and is suitable for rough specular samples. Our second setup, suitable for highly specular samples, employs the LCD panel on a tablet as an extended illumination source (a toy per-pixel reflectance fit follows below).

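    As a rough illustration of the kind of per-pixel fit such handheld camera-flash data makes possible, here is a hedged sketch that recovers diffuse and specular coefficients from synthetic collocated-flash observations by linear least squares (the Blinn-Phong model, fixed lobe exponent and synthetic data are assumptions for illustration, not the project's reflectance model):

        import numpy as np

        rng = np.random.default_rng(0)
        normal = np.array([0.0, 0.0, 1.0])
        alpha = 50.0                                 # assumed fixed specular exponent

        # Collocated camera and flash: the light direction equals the view direction,
        # so the Blinn-Phong half vector equals the light direction.
        dirs = rng.normal(size=(40, 3))
        dirs[:, 2] = np.abs(dirs[:, 2]) + 0.5
        dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)
        ndotl = dirs @ normal

        # Synthetic observations from a "ground truth" pixel.
        kd_true, ks_true = 0.6, 0.3
        obs = kd_true * ndotl + ks_true * ndotl**alpha

        # The model is linear in (kd, ks) once alpha is fixed: solve by least squares.
        A = np.stack([ndotl, ndotl**alpha], axis=1)
        (kd, ks), *_ = np.linalg.lstsq(A, obs, rcond=None)
        print(round(float(kd), 3), round(float(ks), 3))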