
Computer graphics is generally divided into three primary research areas: modeling, rendering, and animation. Modeling deals with the specification of shape and appearance in a way that can be represented mathematically and stored on a computer. Rendering deals with the creation of shaded images, focusing on the interaction between light and 3D geometric models. Animation creates an illusion of motion through a sequence of images.

CGLab at SKKU particularly deals with rendering and its associated areas. The fundamental principles underlying these topics include physics, optics, GPU algorithms, and visual perception. Our recent research extends them with deep learning (DL) to explore open problem spaces in which a computational model usually cannot be obtained intuitively. Detailed subjects are listed below.


Real-Time GPU Rendering and Optics
Despite the strides made in graphics algorithms and hardware, real-time rendering of natural phenomena remains challenging. In general, real-time performance is obtained by sacrificing quality through approximations of the underlying physics. However, we believe there are always creative opportunities to bring image quality closer to reference solutions while maintaining interactive real-time performance. GPUs (graphics processing units) considerably help us realize our novel algorithms and data structures at high performance.

Real-time global illumination for virtual reality (VR) and augmented reality (AR) must be computed under hard real-time constraints, usually above 60 frames per second. We develop efficient techniques to achieve visually plausible and temporally coherent appearance. In particular, we improve volume-based approximations of global illumination, and extend global-illumination techniques from pure VR to AR through efficient acquisition of scene geometry, light sources, and materials from input video streams.

Our former studies on lens blur effects, lens flare, and optical ray tracing were successful demonstrations of this belief. Creative combinations of rasterization and ray tracing allowed us to achieve real-time performance and high image quality at the same time. We continue to seek creative solutions to many open rendering problems.

DL4R: Neural Rendering and View Synthesis
DL for rendering, including neural rendering, opens another possibility. Contrary to what the name may suggest, rendering and the mechanism of DL are at odds, because many phenomena in rendering are computationally predictable with explicit models. Instead, we understand DL as a compact nonlinear modeler of implicit representations for many rendering problems, and we explore how it can be utilized effectively for rendering and modeling.
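To make the idea of a "compact nonlinear modeler of implicit representations" concrete, the following is a minimal sketch: a small set of random sinusoidal coordinate features with a trained linear readout encodes a 1D signal implicitly, so the signal can be queried at any coordinate from a few weights alone. The sizes, frequencies, and target signal are illustrative assumptions, not a model actually used by the lab.

```python
import numpy as np

# Encode a 1D signal implicitly: store only small weight arrays,
# and reconstruct the signal from query coordinates.
rng = np.random.default_rng(0)
x = np.linspace(0.0, 1.0, 256)[:, None]      # query coordinates
y = np.sin(2.0 * np.pi * 3.0 * x)            # signal to encode implicitly

H = 64                                       # feature width (illustrative)
W1 = rng.normal(0.0, 10.0, (1, H))           # random frequencies
b1 = rng.uniform(0.0, 2.0 * np.pi, (H,))     # random phases
feats = np.sin(x @ W1 + b1)                  # nonlinear coordinate features

# Least-squares readout: the "compact model" is just (W1, b1, W2).
W2, *_ = np.linalg.lstsq(feats, y, rcond=None)

mse = float(np.mean((feats @ W2 - y) ** 2))
print(f"reconstruction MSE: {mse:.2e}")
```

The representation is resolution-free: evaluating `np.sin(x_new @ W1 + b1) @ W2` at unseen coordinates reconstructs the signal there as well, which is the property neural implicit representations exploit for rendering.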

View synthesis is one of the common goals of neural rendering. We tackle view synthesis with visibility-driven, effective scene rasterization, and also extend it with implicit neural representations.

R4DL: Synthetic Data for DL and Differentiable Rendering
Rendering for DL focuses on the generation of (labeled) images that can be fed into a network as input. This significantly widens the application areas of DL to domains where input data cannot easily be obtained. To this end, we attempt to reduce the domain gap between real and synthetic images, diverting realistic rendering towards imperfect real imagery (CG images are too ideal and clean for this purpose).
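As a hedged illustration of "dirtying" an ideal CG image toward imperfect real imagery, the sketch below applies a mild blur, radial vignetting, and additive noise to a clean synthetic patch. The specific degradations and parameters are illustrative assumptions only, not the lab's actual domain-gap pipeline.

```python
import numpy as np

def degrade(img, rng, sigma_noise=0.02, vignette=0.3):
    """Make a clean CG image look more like a real camera capture."""
    h, w = img.shape
    # Separable 3-tap blur standing in for mild defocus.
    k = np.array([0.25, 0.5, 0.25])
    blurred = np.apply_along_axis(lambda r: np.convolve(r, k, mode="same"), 1, img)
    blurred = np.apply_along_axis(lambda c: np.convolve(c, k, mode="same"), 0, blurred)
    # Radial vignetting: darker toward the corners, as in real lenses.
    yy, xx = np.mgrid[0:h, 0:w]
    r2 = ((yy - h / 2) ** 2 + (xx - w / 2) ** 2) / ((h / 2) ** 2 + (w / 2) ** 2)
    out = blurred * (1.0 - vignette * r2)
    # Additive Gaussian noise standing in for sensor shot/read noise.
    out += rng.normal(0.0, sigma_noise, img.shape)
    return np.clip(out, 0.0, 1.0)

rng = np.random.default_rng(1)
clean = np.full((32, 32), 0.8)               # flat, "too clean" CG patch
noisy = degrade(clean, rng)
print("std before:", float(clean.std()), "after:", round(float(noisy.std()), 3))
```

In practice such degradations would be calibrated against real sensor statistics rather than fixed by hand, but the sketch shows the direction: synthetic training images are pushed toward the imperfections the network will see at test time.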

Differentiable rendering deals with inverse rendering problems: finding or refining the input attributes of rendering, including geometry, materials, textures, and illumination. We focus on a rasterization-based approach to differentiable rendering.
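The core difficulty of rasterization-based differentiable rendering can be shown in one dimension: hard pixel coverage is a step function of the edge position, so its gradient is zero almost everywhere. Replacing it with a sigmoid "soft" coverage makes the image differentiable in the edge parameter, and gradient descent can then recover geometry from the image. This is an illustrative toy with assumed parameters, not the lab's actual formulation.

```python
import numpy as np

pixels = np.arange(16) + 0.5                 # 1D pixel centers
target = (pixels < 9.0).astype(float)        # reference image: edge at 9.0

def render_soft(t, sharpness=4.0):
    # Soft coverage: ~1 left of the edge position t, ~0 right of it.
    return 1.0 / (1.0 + np.exp(sharpness * (pixels - t)))

def loss_and_grad(t, sharpness=4.0):
    img = render_soft(t, sharpness)
    diff = img - target
    loss = float(np.sum(diff ** 2))
    # d(img)/dt = sharpness * img * (1 - img)   (sigmoid derivative)
    grad = float(np.sum(2.0 * diff * sharpness * img * (1.0 - img)))
    return loss, grad

# Gradient descent recovers the edge position from the image alone.
t = 4.0                                      # wrong initial edge guess
for _ in range(200):
    loss, g = loss_and_grad(t)
    t -= 0.1 * g
print(f"estimated edge: {t:.2f} (true 9.0)")
```

With a hard step instead of the sigmoid, `grad` would be zero for any `t` away from a pixel boundary and optimization could not move; the softening is what rasterization-based differentiable renderers add to the standard pipeline.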

DL-Assisted Real-Time 3D Image Processing
Rendering usually handles a gigantic amount of data. To facilitate rendering, graphics hardware has evolved rapidly over recent decades. One important advance is the user-programmable rendering/imaging pipeline; accordingly, GPU capability has expanded beyond its traditional usage to encompass general-purpose computing. We attempt to improve the performance of real-time imaging pipelines and general computing by up to two orders of magnitude. Our approach focuses on creative algorithms rather than a straightforward use of GPUs and CUDA/OpenCL.

Recently, DL has been evolving into practical applications beyond in-lab proofs of concept. DL suggests solutions for many nonlinear problems, but it is still infeasible for real-time (e.g., faster than 60 Hz) high-resolution imaging. We develop practical GPU imaging pipelines in which DL solves a key subproblem and the remainder is accelerated by GPU rendering/computing. One example is DL-based depth estimation for 3D image generation and its applications to warping and immersive imaging.
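The warping step mentioned above (the non-DL remainder of such a pipeline) can be sketched as depth-image-based rendering: given colors and per-pixel depth for one view, pixels are shifted by a disparity inversely proportional to depth to synthesize a novel view, with nearer surfaces overwriting farther ones at occlusions. The 1D scene, baseline, and depth values below are illustrative assumptions.

```python
import numpy as np

def warp_row(colors, depths, baseline=8.0):
    """Forward-warp one scanline to a horizontally shifted viewpoint."""
    w = len(colors)
    out = np.zeros(w)
    out_depth = np.full(w, np.inf)           # z-buffer for the target view
    for x in range(w):
        disparity = baseline / depths[x]     # nearer surface -> larger shift
        xt = int(round(x + disparity))
        if 0 <= xt < w and depths[x] < out_depth[xt]:
            out[xt] = colors[x]              # nearest surface wins
            out_depth[xt] = depths[x]
    return out

colors = np.linspace(0.0, 1.0, 16)           # a simple color gradient
depths = np.full(16, 4.0)                    # background plane
depths[4:8] = 2.0                            # a nearer object in front
warped = warp_row(colors, depths)
print("warped row:", np.round(warped, 2))
```

Disoccluded pixels (revealed behind the nearer object) remain unfilled here; a practical pipeline would inpaint such holes, which is exactly the kind of step a GPU imaging stage handles after the DL depth estimate.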


* Associated grants: NRF Korea (2012R1A2A2A01045719, 2015R1A2A2A01003783)
Display Algorithms and Modeling for VR and Metaverse
The Metaverse is one of the actively developing multidisciplinary subject areas. We investigate technological aspects of the Metaverse, including the modeling of virtual avatars and virtual environments, display algorithms, visual fatigue, and VR hardware. Many possibilities are now open, and we build new techniques on the basis of our rendering and modeling algorithms.

Stereoscopic (binocular) displays are employed to mediate interactive VR/AR experiences. Such display devices still incur visual fatigue in many optical and perceptual aspects. To cope with these problems, we investigate how to improve the optical accuracy of VR displays in terms of motion blur and optical aberrations.

As 4K and 8K displays become increasingly popular, traditional raster algorithms and pipelines may encounter a bottleneck in pixel processing, and rendering at native resolution may not be optimal in the near future. To this end, we investigate how to design a novel pipeline with resolution-independent G-buffers, which encode geometry and shading information at much lower data-space complexity and reconstruct it at a higher resolution without precision loss.
27336, College of Software, Sungkyunkwan University, Tel. +82 31-299-4917, Seobu-ro 2066, Jangan-gu, Suwon, 16419, South Korea
Campus map (how to reach CGLab)