Abstract
We present a GPU-based real-time rendering method that simulates high-quality depth-of-field blur, comparable in quality to multiview accumulation methods. Most real-time approaches have difficulty approximating visibility and view-dependent shading well because they rely on a single view image. Like them, our method avoids rendering the scene multiple times, yet it can approximate different views by relying on a layered image-based scene representation. We present several performance and quality improvements, such as early culling, approximate cone tracing, and jittered sampling. Our method achieves artifact-free results for complex scenes and reasonable depth-of-field blur in real time.
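Two of the ingredients named above, the circle of confusion that governs the blur size and the jittered sampling used to avoid banding, can be illustrated with a short sketch. The following C snippet assumes a standard thin-lens camera model; the function names and parameters are illustrative assumptions, not the paper's actual implementation.

/* Sketch: thin-lens circle of confusion (CoC) and jittered lens sampling.
 * Illustrative only; names and parameters are not from the paper. */
#include <math.h>
#include <stdlib.h>

#ifndef M_PI
#define M_PI 3.14159265358979323846
#endif

/* CoC diameter (image-plane units) for a point at depth z, given
 * aperture diameter A, focal length f, and focus distance z_f. */
static double coc_diameter(double z, double A, double f, double z_f)
{
    return A * (f / (z_f - f)) * fabs(1.0 - z_f / z);
}

/* One jittered sample on a lens disc of radius r: take stratum (i, j)
 * of an n x n grid, jitter within the stratum, and map the unit square
 * to the disc with uniform density (sqrt warp on the radius). */
static void jittered_lens_sample(int i, int j, int n, double r,
                                 double *dx, double *dy)
{
    double u   = (i + (double)rand() / RAND_MAX) / n;
    double v   = (j + (double)rand() / RAND_MAX) / n;
    double rho = r * sqrt(u);        /* uniform area density over the disc */
    double phi = 2.0 * M_PI * v;
    *dx = rho * cos(phi);
    *dy = rho * sin(phi);
}

In a DoF renderer of this kind, each jittered lens offset would shift the view slightly, and the per-pixel CoC diameter would bound how far samples are gathered; stratified jitter trades the banding of regular sampling patterns for less objectionable noise.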
Paper preprints, slides, additional videos, GitHub, and Google Scholar
* Copyright Disclaimer: paper preprints on this page are provided only for personal academic use, not for redistribution.
Bibliography
@article{lee09:msdof,
  title   = {{Depth-of-Field Rendering with Multiview Synthesis}},
  author  = {Sungkil Lee and Elmar Eisemann and Hans-Peter Seidel},
  journal = {{ACM Trans. Graphics (Proc. SIGGRAPH ASIA'09)}},
  volume  = {28},
  number  = {5},
  pages   = {134:1--6},
  year    = {2009}
}