Graphics practitioners generally recognize three primary research areas in computer graphics: modeling, rendering, and animation. Modeling deals with the specification of shape and appearance in a form that can be represented mathematically and stored on a computer. Rendering deals with the creation of shaded images, focusing on the interaction between lights and 3D geometric models. Animation creates an illusion of motion through a sequence of images.
R4DL: Synthetic Data Generation for Deep Learning
DL4R: Neural Rendering and View Synthesis
DL for rendering, including neural rendering, opens another possibility. At first glance, rendering and the mechanisms of DL may seem contradictory, because many phenomena in rendering are computationally predictable with explicit models. Instead, we now understand DL as a compact nonlinear modeler of implicit representations for many rendering problems. We explore how it can be effectively utilized for rendering and modeling.
View synthesis is one of the common goals in neural rendering. We tackle view synthesis with visibility-driven, effective scene rasterization, and also extend it with implicit neural representations.
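The core idea behind an implicit neural representation can be sketched as a small coordinate network that maps a 3D position to color and density. The sketch below is purely illustrative and not our actual method: the network architecture, positional encoding, and random (untrained) weights are all assumptions for demonstration; a real model would be trained on posed images of the scene.

```python
import numpy as np

rng = np.random.default_rng(0)

def positional_encoding(x, num_freqs=4):
    """Map coordinates to sin/cos features so the MLP can fit high frequencies."""
    feats = [x]
    for i in range(num_freqs):
        feats.append(np.sin((2.0 ** i) * np.pi * x))
        feats.append(np.cos((2.0 ** i) * np.pi * x))
    return np.concatenate(feats, axis=-1)

in_dim = 3 * (1 + 2 * 4)                   # 3 coords, each with 4 sin/cos bands
W1 = rng.normal(scale=0.1, size=(in_dim, 64))
W2 = rng.normal(scale=0.1, size=(64, 4))   # outputs: (r, g, b, density)

def query_field(points):
    """Evaluate the implicit field at an (N, 3) array of positions."""
    h = np.maximum(positional_encoding(points) @ W1, 0.0)  # ReLU hidden layer
    out = h @ W2
    rgb = 1.0 / (1.0 + np.exp(-out[:, :3]))  # sigmoid -> colors in [0, 1]
    density = np.maximum(out[:, 3], 0.0)     # non-negative density
    return rgb, density

rgb, density = query_field(rng.uniform(-1, 1, size=(8, 3)))
print(rgb.shape, density.shape)  # (8, 3) (8,)
```

Rendering a novel view would then integrate these color/density queries along camera rays, which is where the visibility-driven rasterization above can accelerate the process.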
Real-Time GPU Rendering and Optics
Real-time global illumination for virtual reality (VR) and augmented reality (AR) must be computed under hard real-time constraints, usually at 60 frames per second or higher. We develop efficient techniques to achieve visually plausible and temporally coherent appearance. In particular, we improve volume-based approximations of global illumination. We also extend global illumination techniques from pure VR to AR, with efficient acquisition of scene geometry, light sources, and materials from input video streams.
Our earlier studies on lens-blur effects, lens flare, and optical ray tracing were successful attempts to prove this belief. Creative combinations of rasterization and ray tracing allowed us to achieve real-time performance and high image quality at the same time. We continue to seek creative solutions to many open rendering problems.
Display Algorithms for VR
The Metaverse is one of the most active multidisciplinary subject areas. We investigate technological aspects of the Metaverse, including the modeling of virtual avatars and virtual environments, display algorithms, visual fatigue, and VR hardware. Many possibilities are now open, and we build new techniques on the basis of our rendering and modeling algorithms.
Stereoscopic (binocular) displays are needed to mediate interactive VR/AR experiences. Such display devices still incur visual fatigue in many optical and perceptual aspects. To cope with these problems, we investigate how to improve the optical accuracy of VR displays in terms of motion blur and optical aberrations.
As 4K and 8K displays become popular, traditional raster algorithms and pipelines may encounter a bottleneck in pixel processing. Processing at native resolution may not be optimal in the near future. To this end, we investigate how to design a novel pipeline with resolution-independent G-buffers, which encode geometry and shading information at much lower data-space complexity and reconstruct it at a higher resolution without precision loss.
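One way to see how a buffer can be resolution-independent is to store analytic coefficients rather than per-pixel samples. The sketch below is a hypothetical toy, not our actual G-buffer design: it stores a single tile's depth as plane coefficients z = a·x + b·y + c, so depth can be reconstructed exactly at any output resolution for planar geometry.

```python
import numpy as np

# Hypothetical toy: a depth "G-buffer" entry stored as plane coefficients
# (a, b, c), reconstructable at any resolution without precision loss for
# planar surfaces. A real design would also encode normals and shading data.

plane = (0.1, -0.05, 2.0)  # one tile's plane coefficients (illustrative)

def reconstruct_depth(plane, width, height):
    """Evaluate z = a*x + b*y + c over a width x height grid in [0, 1]^2."""
    a, b, c = plane
    xs, ys = np.meshgrid(np.linspace(0, 1, width), np.linspace(0, 1, height))
    return a * xs + b * ys + c

low  = reconstruct_depth(plane, 4, 4)    # low-resolution reconstruction
high = reconstruct_depth(plane, 16, 16)  # 4x upscaled reconstruction

# Corner depths agree exactly: the encoding, not the resolution, carries
# the geometric information.
print(np.isclose(low[0, 0], high[0, 0]), np.isclose(low[-1, -1], high[-1, -1]))
```

The point of the toy is that the stored data size depends on the scene's geometric complexity (number of tiles/planes), not on the output pixel count, which is the behavior a resolution-independent pipeline targets.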
GPU Algorithms
Rendering usually handles a gigantic amount of data. To facilitate rendering, graphics hardware has evolved rapidly over recent decades. One of the important advances is the user-programmable rendering pipeline. Accordingly, the capability of GPUs has expanded beyond traditional graphics usage to encompass general-purpose computing. We attempt to achieve performance improvements of up to two orders of magnitude in general computing. Our approach focuses on creative algorithms rather than the straightforward use of GPUs and CUDA/OpenCL.
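A representative pattern behind many GPU general-purpose algorithms is the tree-style parallel reduction: summing N values in O(log N) parallel steps instead of N serial additions. The sketch below is a CPU-side illustration of that pattern (numpy stands in for the per-step data-parallel work a GPU would spread across threads); it is not a specific algorithm from our work.

```python
import numpy as np

def tree_reduce_sum(values):
    """Sum an array by repeated pairwise addition: O(log N) parallel steps."""
    x = np.asarray(values, dtype=np.float64)
    while x.size > 1:
        if x.size % 2:                 # pad odd-length arrays with a zero
            x = np.append(x, 0.0)
        x = x[0::2] + x[1::2]          # one parallel step: all pairwise sums
    return float(x[0])

print(tree_reduce_sum(np.arange(1, 9)))  # 36.0, matching sum(1..8)
```

On a GPU, each pairwise-sum step maps to one synchronized pass over the thread block, which is why reductions, prefix sums, and similar primitives are the building blocks that creative GPU algorithms recombine.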
* Associated grants: NRF Korea (2012R1A2A2A01045719, 2015R1A2A2A01003783)