idea: take 2 or more images (bursts of images) and combine them
idea: capture multiple low-res (LR) images and fuse them into a single super-resolved (SR) image
the LR images must be sub-pixel shifted
example for a 1D line scan
solve it as a minimization problem: normally well-conditioned for upscaling factors of 2-3x
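A minimal 1D sketch of this fusion as a least-squares problem, assuming a toy forward model (sub-pixel shift by linear interpolation, then box-filter downsampling); function names and the circular boundary handling below are my own simplifications, not a fixed recipe:

```python
import numpy as np

def shift_matrix(n, s):
    """n x n operator applying a sub-pixel shift of s HR samples (linear interp, circular)."""
    A = np.zeros((n, n))
    for i in range(n):
        x = (i + s) % n
        lo, w = int(np.floor(x)), x - np.floor(x)
        A[i, lo % n] += 1.0 - w
        A[i, (lo + 1) % n] += w
    return A

def downsample_matrix(n_hr, factor):
    """(n_hr/factor) x n_hr operator averaging groups of 'factor' HR samples."""
    n_lr = n_hr // factor
    D = np.zeros((n_lr, n_hr))
    for i in range(n_lr):
        D[i, i * factor:(i + 1) * factor] = 1.0 / factor
    return D

def superresolve_1d(lr_scans, shifts, factor):
    """Fuse sub-pixel-shifted LR line scans into one HR scan by least squares."""
    n_hr = len(lr_scans[0]) * factor
    D = downsample_matrix(n_hr, factor)
    A = np.vstack([D @ shift_matrix(n_hr, s) for s in shifts])
    y = np.concatenate(lr_scans)
    x_hr, *_ = np.linalg.lstsq(A, y, rcond=None)
    return x_hr

# toy usage: three LR observations of one HR signal, shifted by fractions of an HR pixel
rng = np.random.default_rng(0)
x_true = rng.random(64)
shifts, factor = [0.0, 0.7, 1.3], 2
D = downsample_matrix(64, factor)
lr = [D @ shift_matrix(64, s) @ x_true for s in shifts]
x_rec = superresolve_1d(lr, shifts, factor)
```

For larger upscaling factors the system becomes increasingly ill-conditioned and needs a prior/regularizer, which matches the 2-3x rule of thumb above.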
Focal Stack Compositing for Depth of Field Control
compositing rule: at each pixel, use the slice with the highest local gradient (most in focus)
what changes across the stack? exposure and depth of field -> extract depth!
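A minimal sketch of the highest-gradient compositing rule, assuming a grayscale, pre-aligned focal stack; the function and parameter names (sigma, win) are mine:

```python
import numpy as np
from scipy.ndimage import gaussian_gradient_magnitude, uniform_filter

def composite_focal_stack(stack, sigma=1.0, win=9):
    """stack: (K, H, W) aligned grayscale focal stack.
    Per pixel, keep the slice with the highest local gradient energy (sharpest);
    the index of the winning slice doubles as a coarse depth map."""
    stack = np.asarray(stack, dtype=np.float64)
    sharpness = np.stack([
        uniform_filter(gaussian_gradient_magnitude(img, sigma) ** 2, size=win)
        for img in stack
    ])
    depth_index = np.argmax(sharpness, axis=0)          # (H, W) coarse depth
    rows, cols = np.indices(depth_index.shape)
    all_in_focus = stack[depth_index, rows, cols]
    return all_in_focus, depth_index
```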
Confocal Stereo [Hasinoff and Kutulakos 2007]
idea: intensity of in-focus point remains constant for varying aperture
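A toy sketch of that constancy test, assuming the images are already radiometrically aligned across apertures (the paper handles this calibration carefully); the per-pixel variance test and names below are my simplification: for each pixel, the focus setting whose intensity varies least over aperture is taken as in focus.

```python
import numpy as np

def confocal_depth(afi, focus_depths):
    """afi: (F, A, H, W) stack over F focus settings and A aperture settings.
    In-focus pixels keep (nearly) constant intensity as the aperture varies,
    so pick, per pixel, the focus setting with the lowest variance over aperture."""
    variance = np.asarray(afi, dtype=np.float64).var(axis=1)   # (F, H, W)
    best_focus = np.argmin(variance, axis=0)                    # (H, W)
    return np.asarray(focus_depths)[best_focus]                 # per-pixel depth
```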
motion deblurring: a secondary, fast, noisy, low-res camera for motion PSF estimation
captured data
blur estimation
deconvolution vs tripod
captured, estimated, ground truth
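Once the motion PSF has been estimated, the long exposure can be deconvolved non-blindly. A minimal Wiener-filter sketch (my choice of deconvolution method; k is an assumed noise-to-signal constant, and boundaries are treated as periodic for brevity):

```python
import numpy as np

def wiener_deconvolve(blurred, psf, k=1e-2):
    """Non-blind deconvolution of a grayscale image with an estimated motion PSF."""
    psf_pad = np.zeros_like(blurred, dtype=np.float64)
    psf_pad[:psf.shape[0], :psf.shape[1]] = psf / psf.sum()
    # move the kernel centre to the origin so the result is not shifted
    psf_pad = np.roll(psf_pad, (-(psf.shape[0] // 2), -(psf.shape[1] // 2)), axis=(0, 1))
    B, P = np.fft.fft2(blurred), np.fft.fft2(psf_pad)
    X = np.conj(P) * B / (np.abs(P) ** 2 + k)   # Wiener estimate in the frequency domain
    return np.real(np.fft.ifft2(X))
```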
same idea
super short, high ISO noisy exposure for motion PSF estimation
longer exposure with camera shake -> deblur
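A naive frequency-domain sketch of estimating the blur kernel from such a pair, assuming the two exposures are aligned and brightness-matched and treating B ≈ k * N as a regularized least squares (the published approaches are iterative and more robust; names and the lam/ksize parameters are mine):

```python
import numpy as np

def estimate_kernel(noisy_sharp, blurred, lam=1e-2, ksize=31):
    """Estimate the motion blur kernel k from an aligned pair:
    N = short, noisy but sharp exposure; B = long, blurred exposure; B ≈ k * N."""
    Nf, Bf = np.fft.fft2(noisy_sharp), np.fft.fft2(blurred)
    Kf = np.conj(Nf) * Bf / (np.abs(Nf) ** 2 + lam)   # regularized division
    k = np.fft.fftshift(np.real(np.fft.ifft2(Kf)))    # centre the kernel
    cy, cx = k.shape[0] // 2, k.shape[1] // 2
    k = k[cy - ksize // 2: cy + ksize // 2 + 1,
          cx - ksize // 2: cx + ksize // 2 + 1]
    k = np.clip(k, 0.0, None)                          # kernels are non-negative
    return k / k.sum()                                 # ...and sum to one
```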
synthetic experiment
very good kernel estimation
more examples
look at the highlight -> blur kernel
de-ringing (suppress ringing artifacts from deconvolution)
four pictures, four lighting directions, different shadows
Edge detection -> non-photorealistic rendering
Similar setup for photometric stereo
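A minimal Lambertian photometric-stereo sketch for that setup, assuming K ≥ 3 images under known, distant light directions (function and variable names are mine):

```python
import numpy as np

def photometric_stereo(images, light_dirs):
    """images: (K, H, W) under K known distant lights; light_dirs: (K, 3) unit vectors.
    Lambertian model I_k = albedo * (n . l_k); per-pixel least squares for g = albedo * n
    (shadowed and saturated pixels are ignored for brevity)."""
    images = np.asarray(images, dtype=np.float64)
    K, H, W = images.shape
    I = images.reshape(K, -1)                                              # (K, H*W)
    G, *_ = np.linalg.lstsq(np.asarray(light_dirs, float), I, rcond=None)  # (3, H*W)
    albedo = np.linalg.norm(G, axis=0)
    normals = G / np.maximum(albedo, 1e-8)
    return normals.reshape(3, H, W), albedo.reshape(H, W)
```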
dynamic range: ratio between brightest and darkest value
what's the problem?
estimate the response curve
no need if we have RAW (already linear)
use a color chart
don't have a color chart?
capture multiple exposures, estimate the response from them, apply it as a lookup table
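Without a chart, the response can be recovered from the exposure stack itself. The sketch below follows a Debevec-Malik-style least squares (g(Z) = ln X + ln t plus a smoothness term), with my own variable names and a simplified hat weighting:

```python
import numpy as np

def solve_response(Z, log_t, lam=100.0, n_levels=256):
    """Recover the log inverse response g as a lookup table.
    Z: (N, P) integer pixel values of N sample locations in P exposures;
    log_t: (P,) log exposure times. Unknowns: g[0..255] and ln X_i per location."""
    N, P = Z.shape
    w = lambda z: float(min(z, n_levels - 1 - z) + 1)    # simple hat weighting
    A = np.zeros((N * P + (n_levels - 2) + 1, n_levels + N))
    b = np.zeros(A.shape[0])
    k = 0
    for i in range(N):                                   # data term: g(Z_ij) - ln X_i = ln t_j
        for j in range(P):
            wij = w(Z[i, j])
            A[k, Z[i, j]] = wij
            A[k, n_levels + i] = -wij
            b[k] = wij * log_t[j]
            k += 1
    A[k, n_levels // 2] = 1.0                            # fix the scale: g(128) = 0
    k += 1
    for z in range(1, n_levels - 1):                     # smoothness of g
        wz = lam * w(z)
        A[k, z - 1], A[k, z], A[k, z + 1] = wz, -2.0 * wz, wz
        k += 1
    x, *_ = np.linalg.lstsq(A, b, rcond=None)
    return x[:n_levels]                                  # lookup table: I_lin = exp(g[Z])
```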
individual exposure is radiance (X) times exposure time ($t_i$): $I_{\mathrm{lin},i} = t_i X$
radiance up to a scale... use a reference
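A minimal merging sketch following that relation: each linearized exposure gives an estimate X ≈ I_lin,i / t_i, and the estimates are averaged with a hat weight that trusts well-exposed pixels most (the weighting choice and names are mine):

```python
import numpy as np

def merge_hdr(linear_images, exposure_times):
    """linear_images: list of (H, W) linearized exposures normalized to [0, 1];
    exposure_times: matching list of t_i. Returns radiance up to a global scale."""
    num = np.zeros_like(linear_images[0], dtype=np.float64)
    den = np.zeros_like(num)
    for img, t in zip(linear_images, exposure_times):
        w = 1.0 - np.abs(2.0 * img - 1.0)       # down-weight under/over-exposed pixels
        num += w * img / t
        den += w
    return num / np.maximum(den, 1e-8)           # relative radiance map
```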
Image Based Lighting
Light Probes
sun overexposed
foreground too dark
gamma correction
colors are washed out!
gamma in intensity only
intensity details lost
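A tiny sketch of the "gamma in intensity only" trick above: compress the intensity channel and keep the per-channel colour ratios, instead of applying gamma to R, G, B independently (which washes colours out). Assumes a linear HDR input; names are mine.

```python
import numpy as np

def tonemap_intensity_gamma(rgb, gamma=0.4):
    """rgb: (H, W, 3) linear HDR image. Compress intensity only, preserve colour ratios."""
    intensity = rgb.mean(axis=2, keepdims=True) + 1e-8
    return (rgb / intensity) * intensity ** gamma   # R/I, G/I, B/I kept; I compressed
```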
compute gradients, attenuate/rescale them, integrate back (solve a Poisson equation)
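A sketch of that gradient-domain pipeline in the spirit of Fattal-style compression, assuming a log-luminance input; the single-scale attenuation, periodic-boundary FFT Poisson solver, and the alpha parameter are my simplifications:

```python
import numpy as np

def gradient_domain_tonemap(log_lum, alpha=0.85):
    """log_lum: (H, W) log luminance. Attenuate large gradients, then integrate
    back by solving the Poisson equation lap(I) = div(scaled gradients)."""
    gx = np.roll(log_lum, -1, axis=1) - log_lum              # forward differences
    gy = np.roll(log_lum, -1, axis=0) - log_lum
    mag = np.sqrt(gx ** 2 + gy ** 2) + 1e-6
    scale = (mag / mag.mean()) ** (alpha - 1.0)               # alpha < 1 shrinks big gradients
    gx, gy = gx * scale, gy * scale
    div = (gx - np.roll(gx, 1, axis=1)) + (gy - np.roll(gy, 1, axis=0))
    h, w = log_lum.shape
    lap = (2.0 * np.cos(2.0 * np.pi * np.fft.fftfreq(h))[:, None] +
           2.0 * np.cos(2.0 * np.pi * np.fft.fftfreq(w))[None, :] - 4.0)
    lap[0, 0] = 1.0                                           # avoid division by zero at DC
    out = np.real(np.fft.ifft2(np.fft.fft2(div) / lap))
    return out - out.mean()                                   # result defined up to a constant
```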
results are quite similar
Durand's looks a bit better
a lot of work in Tone Mapping
most aim at "tone reproduction": perceptually based
[Masia et al. 2009] "Evaluation of reverse tone mapping through varying exposure conditions"
Due: 9 November
focal stack: use highest gradient
HDR: estimate the response curve, merge multiple exposures, tone map