High-Resolution Relightable Buildings From Photographs
This work proposes a complete image-based process that facilitates recovery of both gross scale geometry and local surface structure to create highly detailed 3D models of building façades from photographs. We approximate both albedo and sufficient local geometric structure to compute complex self-shadowing effects, and fuse this with a gross scale 3D model. Our approach yields a perceptually high-quality model, imparting the illusion of measured reflectance.
The requirements of our approach are that image capture must be performed under diffuse lighting and surfaces in the images must be predominantly Lambertian. Exemplars of materials are obtained through surface depth hallucination, and our novel method matches these with multi-view image sequences that are also used to automatically recover 3D geometry.
Relightable Buildings from Images
Francho Melendez, Mashhuda Glencross, Gregory J. Ward and Roger Hubbold: ‘Relightable Buildings from Images.’ in Eurographics: Special Area on Cultural Heritage, Llandudno, April 2011.
Figure 1: Input data, and the relit model under novel lighting conditions.
Introduction
We present a quasi-automatic image-based building reconstruction system that recovers fully relightable models by approximating both albedo and sufficient textured surface detail to reproduce complex self-shadowing effects. Albedo and surface geometric detail are recovered through an exemplar-based transfer approach. We focus on simple data capture, inexpensive equipment, and automatic processing. The system produces perceptually high-quality models that closely approximate the visual appearance of the relit building.
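Once albedo and a per-texel depth map are recovered, relighting can be illustrated with simple Lambertian shading from depth-derived normals. This is a minimal NumPy sketch of the idea only: the function names are ours, and it omits the self-shadowing computation that the full system performs.

```python
import numpy as np

def normals_from_depth(depth, texel_size=1.0):
    """Estimate per-texel surface normals from a depth map via finite differences."""
    dz_dy, dz_dx = np.gradient(depth, texel_size)
    n = np.dstack([-dz_dx, -dz_dy, np.ones_like(depth)])
    n /= np.linalg.norm(n, axis=2, keepdims=True)
    return n

def relight(albedo, depth, light_dir):
    """Local Lambertian shading: albedo * max(0, N . L).
    No self-shadowing here -- the paper's renderer also accounts for that."""
    n = normals_from_depth(depth)
    l = np.asarray(light_dir, dtype=float)
    l /= np.linalg.norm(l)
    ndotl = np.clip(np.tensordot(n, l, axes=([2], [0])), 0.0, None)
    return albedo * ndotl[..., None]
```

For a flat depth map lit head-on this reproduces the albedo exactly; tilting the light darkens the surface by the cosine of the incidence angle.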
System Structure
Employing a standard uncalibrated digital SLR camera, we capture two types of image data under diffuse lighting conditions as input to our model recovery pipeline: a hand-held wide-baseline sequence, and textured material exemplar image data consisting of a flash and a no-flash fronto-parallel view of a representative material. The wide-baseline sequence is used to recover a low-resolution model capturing the global structure of the building and to reconstruct a texture map. Exemplars capture the material properties of accessible areas. We transfer the albedo and high-frequency geometric detail from the exemplars to the full model. The pipeline is divided into the following steps:
Gross-scale geometry: We use Structure from Motion techniques to recover the global structure using the wide-baseline sequence.
Multi-view texture mosaicing: An automatic texture reconstruction algorithm uses Markov Random Fields to optimally combine the multi-view data into a single texture map.
Material Exemplars: Flash/No-flash image pairs are used as samples of the different materials present in the building façade. Surface depth hallucination [Glencross et al. 2008] estimates albedo and surface detail for these exemplars.
Albedo and Surface Detail Transfer: We transfer surface detail and albedo from the exemplars to the texture mosaic using Histogram Matching, resulting in a per-texel depth map.
Geometry Fusion: A frequency-based method is used to combine both geometries resulting in a high-resolution model.
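The geometry fusion step can be sketched as a frequency split: keep the low frequencies of the gross-scale model's depth and add the high frequencies of the hallucinated detail map. This is a minimal NumPy sketch under our own assumptions (a separable box blur as the low-pass filter, and depth maps in a shared parameterization); the paper's actual filtering choices may differ.

```python
import numpy as np

def lowpass(img, k=9):
    """Cheap low-pass filter: separable box blur with edge padding (odd k)."""
    kernel = np.ones(k) / k
    pad = k // 2
    blur_1d = lambda r: np.convolve(np.pad(r, pad, mode="edge"), kernel, mode="valid")
    out = np.apply_along_axis(blur_1d, 0, img)   # blur columns
    out = np.apply_along_axis(blur_1d, 1, out)   # blur rows
    return out

def fuse_geometry(gross_depth, detail_depth, k=9):
    """Low frequencies from the gross-scale model + high frequencies
    from the hallucinated detail map."""
    low = lowpass(gross_depth, k)
    high = detail_depth - lowpass(detail_depth, k)
    return low + high
```

Because only the detail map's high-frequency residual is added, any coarse bias in the hallucinated depth is discarded while fine surface relief survives.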
The process is automatic apart from some cleaning of the gross-scale model and a user-guided segmentation of the texture map to associate exemplars with different regions of the model.
Exemplar Based Transfer Approach
The main contribution of this work is the idea of approximating a model's material properties and surface detail by transferring them from a set of exemplars. This approach can potentially be incorporated into any reconstruction pipeline, adding high-frequency detail and albedo transferred from exemplars to the recovered texture.
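The transfer itself rests on histogram matching: remapping the texture mosaic's values so that their distribution follows the exemplar's. A minimal NumPy sketch of that remapping (function names are ours, not the paper's):

```python
import numpy as np

def match_histogram(source, reference):
    """Remap source values so their empirical CDF matches the reference's."""
    s_vals, s_idx, s_counts = np.unique(
        source.ravel(), return_inverse=True, return_counts=True)
    r_vals, r_counts = np.unique(reference.ravel(), return_counts=True)
    s_cdf = np.cumsum(s_counts) / source.size   # CDF position of each source value
    r_cdf = np.cumsum(r_counts) / reference.size
    # Look up, for each source CDF position, the reference value at that position.
    matched = np.interp(s_cdf, r_cdf, r_vals)
    return matched[s_idx].reshape(source.shape)
```

The mapping is monotone, so the spatial pattern of the texture is preserved while its value statistics are taken from the exemplar; applied to depth or albedo channels this transfers the exemplar's characteristics to the mosaic.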
Results and Conclusion
Figure 2 shows the plausible appearance recovered with our system by comparing side-by-side a photograph with a rendering of the model under approximately matched lighting conditions.
This system provides a low-cost way to acquire highly detailed relightable models through a simple capture process and modest user interaction, making it appropriate for a range of visualization and entertainment applications. These models represent a substantial improvement over traditional low-resolution-geometry-plus-texture models, and allow us to reproduce self-shadowing effects, which have been shown to be important for perceptual plausibility.
You can find the complete paper and the video in the download area.
References
GLENCROSS, M., WARD, G. J., JAY, C., LIU, J., MELENDEZ, F., AND HUBBOLD, R. 2008. A perceptually validated model for surface depth hallucination. ACM Transactions on Graphics 27, 3 (Proc. SIGGRAPH), 59:1–59:8.