Computer Game Piece

Video games provide an interesting framework of rules, actions, events, and user interaction for the exploration of musical expression. In this paper we describe a set of computer games designed for group improvisation and controlled by playing musical instruments. The main contribution of our work is the two-way interaction between music and video games, as opposed to the more commonly explored one-way interaction. We investigate the different challenges involved, such as finding adequate game-controlling events, providing enough expressive freedom to musicians, choosing an appropriate playing speed and game complexity, and supporting different forms of artistic expression. We also present the problems encountered, design considerations, and the different solutions we proposed and tested. The games developed in this project were used in a concert and a set of workshops.

Surface Depth Hallucination

Surface depth hallucination offers a simple, fast way to acquire albedo and depth for textured surfaces that exhibit mostly Lambertian reflectance. We obtain depth estimates entirely in image space, and from a single view, so there are no complications arising from registering texture with the recovered depth.

The user simply takes two photos of a textured surface from an identical position, parallel to the surface: one under diffuse lighting conditions as might be encountered on a cloudy day or in shadow, and the other with a flash (strobe). From these two images, together with a flash calibration image, we estimate an albedo map. We also estimate a shading image, primarily from the diffuse-lit capture. We develop a model relating depth to shading, specifically tailored for textured surfaces with relatively little overall depth disparity. By applying this relationship over multiple scales to our shading image, we arrive at a per-pixel height field. Combining this height field with our albedo map gives us a surface model that may be lit under any novel lighting condition and viewed from any direction. Provided we have a suitable exemplar model, our method can also work from a diffuse-lit image alone, by histogram matching it with the albedo and shading images of the exemplar model, further simplifying our data capture process.
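The decomposition above can be sketched in a few lines. This is a minimal illustrative version, not the authors' implementation: the function names are hypothetical, the flash calibration image is assumed to be a photograph of a uniform white card (used here to cancel flash falloff), and the multi-scale depth step is reduced to a crude dark-is-deep heuristic standing in for the paper's tailored shading-to-depth model.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def estimate_albedo_and_shading(diffuse, flash, flash_calib, eps=1e-6):
    """Sketch of the flash/no-flash decomposition (hypothetical names).

    diffuse     : H x W image under diffuse lighting, linear intensities
    flash       : H x W image of the same view lit by the flash
    flash_calib : H x W flash calibration image (assumed: a uniform white
                  card, used to cancel the flash's spatial falloff)
    """
    # Pure flash contribution: subtract the ambient (diffuse) component.
    flash_only = np.clip(flash - diffuse, 0.0, None)
    # Dividing by the calibration image removes flash falloff, leaving a
    # quantity proportional to albedo.
    albedo = flash_only / (flash_calib + eps)
    # Shading is what remains of the diffuse image once albedo is removed.
    shading = diffuse / (albedo + eps)
    return albedo, shading

def shading_to_depth(shading, sigmas=(2, 4, 8, 16)):
    """Crude multi-scale dark-is-deep heuristic (a stand-in for the paper's
    model): at each scale, pixels darker than their local mean are pushed
    down, and the per-scale offsets accumulate into a height field."""
    depth = np.zeros_like(shading)
    for sigma in sigmas:
        local_mean = gaussian_filter(shading, sigma)
        depth += shading - local_mean  # negative where locally dark
    return depth / len(sigmas)
```

On synthetic data where `diffuse = albedo * shading` and the flash adds `albedo * flash_calib`, the decomposition recovers both factors exactly, which is a useful sanity check before trying real photographs.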

We validated our approach through experimental studies and found that users believed our recovered surfaces to be plausible. Further, users found it difficult to reliably identify our synthetically relit images as fakes. Details of our method, and the results of our validation, are to be published at SIGGRAPH 2008.

High-Resolution Relightable Buildings From Photographs

This work proposes a complete image-based process that facilitates recovery of both gross scale geometry and local surface structure to create highly detailed 3D models of building façades from photographs. We approximate both albedo and sufficient local geometric structure to compute complex self-shadowing effects, and fuse this with a gross scale 3D model. Our approach yields a perceptually high-quality model, imparting the illusion of measured reflectance.

The requirements of our approach are that image capture must be performed under diffuse lighting and surfaces in the images must be predominantly Lambertian. Exemplars of materials are obtained through surface depth hallucination, and our novel method matches these with multi-view image sequences that are also used to automatically recover 3D geometry.
Relightable Buildings from Images
Francho Melendez, Mashhuda Glencross, Gregory J. Ward and Roger Hubbold: 'Relightable Buildings from Images.' In Eurographics: Special Area on Cultural Heritage, Llandudno, April 2011.


Figure 1: Input data, and the relit model under novel lighting conditions.
Introduction

We present a quasi-automatic image-based building reconstruction system that recovers fully relightable models by approximating both albedo and sufficient textured surface detail to reproduce complex self-shadowing effects. Albedo and surface geometric detail are recovered through an exemplar-based transfer approach. We focus on simple data capture, inexpensive equipment, and automatic processing. The system produces perceptually high-quality models that closely approximate the visual appearance of the relit building.

System Structure

Using a standard uncalibrated digital SLR camera, we capture two types of image data under diffuse lighting conditions as input to our model recovery pipeline: a hand-held wide-baseline sequence, and textured material exemplar data consisting of a flash and a no-flash fronto-parallel view of a representative material. The wide-baseline sequence is used to recover a low-resolution model capturing the global structure of the building and to reconstruct a texture map. Exemplars capture material properties in accessible areas. We transfer albedo and high-frequency geometric detail from the exemplars to the full model. The pipeline is divided into the following steps:

Gross-scale geometry: We use Structure from Motion techniques to recover the global structure using the wide-baseline sequence.
Multi-view texture mosaicing: An automatic texture reconstruction algorithm uses Markov Random Fields to optimally combine the multi-view data into a single texture map.
Material Exemplars: Flash/no-flash image pairs are used as samples of the different materials present in the building façade. Surface depth hallucination [Glencross et al. 2008] estimates albedo and surface detail for these exemplars.
Albedo and Surface Detail Transfer: We transfer surface detail and albedo from the exemplars to the texture mosaic using histogram matching, resulting in a per-texel depth map.
Geometry Fusion: A frequency-based method is used to combine both geometries resulting in a high-resolution model.
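The histogram-matching step above can be illustrated with the classic sort-based quantile mapping. This is a generic sketch of the technique, not the authors' exact implementation; the function name and the commented region-wise usage are hypothetical.

```python
import numpy as np

def match_histogram(source, reference):
    """Remap `source` so its value distribution matches `reference`
    (sort-based quantile mapping; an illustrative stand-in for the
    albedo/detail transfer step)."""
    shape = source.shape
    src = source.ravel()
    ref_sorted = np.sort(reference.ravel())
    # Rank of each source pixel (0 .. n-1), converted to a quantile.
    ranks = np.argsort(np.argsort(src))
    quantiles = ranks / (src.size - 1)
    # Look up the reference value at each source quantile.
    matched = np.interp(quantiles, np.linspace(0, 1, ref_sorted.size),
                        ref_sorted)
    return matched.reshape(shape)

# Hypothetical region-wise usage: for each segmented material region, the
# texture mosaic is matched against the exemplar's albedo and depth images.
#   albedo_map[region] = match_histogram(texture[region], exemplar_albedo)
#   depth_map[region]  = match_histogram(texture[region], exemplar_depth)
```

Because the mapping is monotonic in pixel rank, the spatial pattern of the texture mosaic is preserved while its statistics are replaced by those of the exemplar.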

The process is automatic, apart from some cleaning of the gross-scale model and a user-guided segmentation of the texture map to associate exemplars with different regions of the model.
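The frequency-based geometry fusion step can be sketched as a simple split of the two depth sources into frequency bands. This is an assumed interpretation of the step, with a hypothetical function name and an arbitrary crossover scale `sigma`; the paper's actual method may differ.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def fuse_geometry(gross_depth, detail_depth, sigma=8.0):
    """Frequency-based fusion sketch (assumed interpretation): keep the
    low frequencies of the gross-scale depth and add the high frequencies
    of the per-texel detail map, with `sigma` as the crossover scale."""
    low = gaussian_filter(gross_depth, sigma)                    # global structure
    high = detail_depth - gaussian_filter(detail_depth, sigma)   # fine detail
    return low + high
```

When the detail map is flat, the fusion degenerates to a smoothed gross-scale model, which makes the split easy to verify in isolation.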

Exemplar Based Transfer Approach

The main contribution of this work is the idea of approximating the material properties and surface detail of a model by transferring them from a series of exemplars. This approach can potentially be incorporated into any reconstruction pipeline to add high-frequency detail and albedo, transferred from exemplars to the recovered texture.

Results and Conclusion


Figure 2 shows the plausible appearance recovered with our system by comparing side-by-side a photograph with a rendering of the model under approximately matched lighting conditions.

This system provides a low-cost way to acquire highly detailed relightable models through a simple capture process and modest user interaction, making it appropriate for a range of visualization and entertainment applications. These models represent a substantial improvement over traditional low-resolution geometry-plus-texture models, and allow us to reproduce self-shadowing effects, which have been shown to be important for perceptual plausibility.

You can find the complete paper and the video in the download area.

References

GLENCROSS, M., WARD, G. J., JAY, C., LIU, J., MELENDEZ, F., AND HUBBOLD, R. 2008. A perceptually validated model for surface depth hallucination. ACM Transactions on Graphics (Proc. SIGGRAPH) 27, 3, 59:1–59:8.