* Copyright Disclaimer: paper preprints in this page are provided only for personal academic uses, and not for redistribution.

Journal and Conference Papers

Joo Young Chun, Hyun-Jin Kim, Ji-Won Hur, Dooyoung Jung, Heon-Jeong Lee, Seung Pil Pack, Sungkil Lee, Gerard Kim, Chung-Yean Cho, Seung-Moo Lee, Hyeri Lee, Seungmoon Choi, Taesu Cheong, and Chul-Hyun Cho
JMIR Serious Games, 10(3), e38284, 2022.
Background: Social anxiety disorder (SAD) is a fear of social situations in which a person anticipates being evaluated negatively. Changes in autonomic response patterns are related to the expression of anxiety symptoms, and virtual reality (VR) sickness can inhibit VR experiences. Objective: This study predicts the severity of specific anxiety symptoms and VR sickness in patients with SAD using machine learning based on in-situ autonomic physiological signals (heart rate and galvanic skin response) during VR treatment sessions. Methods: In this study, 32 participants with SAD took part in six VR sessions. During each VR session, every participant’s heart rate and galvanic skin response were measured in real time. We assessed specific anxiety symptoms using the Internalized Shame Scale (ISS) and the Post-Event Rumination Scale (PERS), and VR sickness using the Simulator Sickness Questionnaire (SSQ), during four VR sessions (#1, #2, #4, and #6). Logistic regression, random forest, and naive Bayes classifiers were used to classify and predict the severity groups in the ISS, PERS, and SSQ subdomains based on the in-situ autonomic physiological signal data. Results: The severity of social anxiety disorder was predicted with three machine learning models. According to the F1 score, the highest prediction performance in each domain was as follows: 0.8421 for the ISS mistake anxiety subdomain using the logistic regression model, 0.7619 for the PERS positive subdomain using the naive Bayes classifier, and 0.7059 for total VR sickness using the random forest model. Conclusions: This study shows that specific anxiety symptoms and VR sickness can be predicted during VR intervention from autonomic physiological signals alone in real time. Machine learning models can predict individuals' severe and non-severe psychological states based on in-situ physiological signal data during VR intervention for real-time interactive services.
These models support the diagnosis of specific anxiety symptoms and VR sickness with minimal participant bias. Clinical Trial: CRIS Registration Number-KCT0003854.
(Submitted) May 27, 2022, (Accepted) July 21, 2022, (Published) Sep. 16, 2022
IF=3.364, PCTL=60.09. ISSN: 2291-9279, JMIR Publications, Inc., Canada
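
As a rough illustration of the Methods above, the sketch below runs the paper's three classifier families and reports out-of-fold F1 scores. The two physiological features and the severity labels are synthetic stand-ins, not the study's data, and scikit-learn is assumed available.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import f1_score
from sklearn.model_selection import cross_val_predict
from sklearn.naive_bayes import GaussianNB

rng = np.random.default_rng(0)
n = 32                                    # participants, as in the study
X = np.column_stack([
    rng.normal(75.0, 10.0, n),            # mean heart rate (bpm), synthetic
    rng.normal(4.0, 1.5, n),              # mean GSR (uS), synthetic
])
# Synthetic binary severity label correlated with the two features.
y = (X[:, 0] + 5.0 * X[:, 1] + rng.normal(0.0, 5.0, n) > 95.0).astype(int)

models = {
    "logistic_regression": LogisticRegression(max_iter=1000),
    "random_forest": RandomForestClassifier(random_state=0),
    "naive_bayes": GaussianNB(),
}
for name, model in models.items():
    pred = cross_val_predict(model, X, y, cv=4)   # out-of-fold predictions
    print(name, round(f1_score(y, pred), 4))
```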

Yuna Jeong, Seung Youp Baek, Yechan Seok, Gi Beom Lee, and Sungkil Lee
IEEE Trans. Visualization and Computer Graphics, 28(2), 1373–1384, 2022.
This article presents a real-time bokeh rendering technique that splats pre-computed sprites but takes dynamic visibilities and intrinsic appearances into account at runtime. To attain alias-free looks without excessive sampling on a lens, the visibilities of strong highlights are densely sampled using rasterization, while regular objects are sparsely sampled using conventional defocus-blur rendering. The intrinsic appearance is dynamically transformed from a precomputed look-up table, which encodes radial aberrations against image distances in a compact 2D texture. Our solution can render complex bokeh effects without undersampling artifacts in real time, and greatly improve the photorealism of defocus-blur rendering.
(Submitted) Mar. 16, (Revised) July 22, (Accepted) Aug. 2, 2020, (Online-published) Aug. 5, 2020, (Published) Feb. 1, 2022
IF=5.226, PCTL=0. ISSN: 1077-2626, IEEE Computer Society, USA

Gi Beom Lee, Moonsoo Jeong, Yechan Seok, and Sungkil Lee
Computer Graphics Forum (Proc. Eurographics'21), 40(2), 489–495, 2021.
Presented at Eurographics 2021, Vienna, Austria (Virtual Conference)
This paper presents a scalable online occlusion culling algorithm, which significantly improves the previous raster occlusion culling using an object-level bounding volume hierarchy. Given occluders found with temporal coherence, we find and rasterize coarse groups of potential occludees in the hierarchy. Within the rasterized bounds, per-pixel ray casting tests the fine-grained visibility of every individual occludee. We further propose acceleration techniques including the read-back of counters for tightly packed multidrawing and occluder filtering. Our solution requires only constant draw calls for batch occlusion tests, while avoiding costly iteration for hierarchy traversal. Our experiments show that our solution outperforms the existing solutions in terms of scalability, culling efficiency, and occlusion-query performance.
(Submitted) Oct. 5, (Conditionally accepted) Dec. 15, 2020, (Accepted) Feb. 8, 2021
IF=2.116, PCTL=65.278. ISSN: 0167-7055, Wiley Blackwell Publishing, England

Ji-Won Hur, Hyemin Shin, Dooyoung Jung, Heon-Jeong Lee, Sungkil Lee, Gerard J. Kim, Chung-Yean Cho, Seungmoon Choi, Seung-Moo Lee, Chul-Hyun Cho
JMIR Mental Health, 8(4), e25731, 2021.
Background: Although it has been well demonstrated that the efficacy of VR therapies for social anxiety disorder (SAD) is comparable to traditional cognitive-behavioral therapy, little is known about the effect of VR on the pathological self-referential processes in SAD. Objective: This study aims to determine the changes in self-referential processing and their neural mechanisms following VR treatment. Methods: We obtained scans from 25 participants with a primary diagnosis of SAD. Then, the subjects received VR-based exposure treatment starting immediately after the baseline MRI scan and clinical assessments and continuing for six sessions. Eventually, 21 SAD subjects completed follow-up scans after the sixth session of VR therapy, in which the subjects were asked to judge whether a series of words (positive, negative, or neutral) was relevant to themselves. Twenty-two age-, sex-, and handedness-matched controls also underwent baseline clinical assessments and fMRI scans. Results: The whole-brain analysis revealed that compared with the controls, the SAD group had increased neural responses during positive self-referential processing in the medial temporal and frontal cortices. This group also showed increased left insular activation and decreased right middle frontal gyrus activation during negative self-referential processing. After undergoing VR-based therapy, the subjects with SAD rated negative words as less relevant (P = .066) and positive words as more relevant (P = .064) to themselves at the postintervention session than at baseline. Their overall symptoms, as measured with the Social Phobia Scale (SPS) and Post-Event Rumination Scale (PERS), were reduced accordingly. We also found that these subjects displayed greater activity in a group of brain regions responsible for self-referential and autobiographical memory processes while viewing positive words at the postintervention fMRI scan.
Compared with that at baseline, higher activation was found within broad somatosensory areas of the subjects with SAD during negative self-referential processing following VR therapy. Conclusions: The current fMRI findings reflect the enhanced physiological and cognitive processing of individuals with SAD in response to self-referential information. They also provide neural evidence of the effect of VR exposure therapy on social anxiety and self-derogation. Clinical Trial: CRIS Registration Number-KCT0003854
(Submitted) Nov. 13, 2020, (Accepted) Mar. 12, 2021, (Published) Apr. 14, 2021
IF=3.535, PCTL=70. ISSN: 2368-7959, JMIR Publications, Inc, Canada

Sun Geol Baek, Sungkil Lee, and Young Ik Eom
Information Sciences, 546, 1306–1327, 2021.
The single-pair all-shortest-path problem is to find all possible shortest paths for a given source-destination pair in a graph. Due to the lack of efficient algorithms for the single-pair all-shortest-path problem, many applications have relied on diverse modifications of existing shortest-path algorithms such as Dijkstra’s algorithm. Such approaches can facilitate the analysis of medium-sized static networks, but their heavy computational cost impedes their use for massive and dynamic real-world networks. In this paper, we present a novel single-pair all-shortest-path algorithm, which performs well on massive as well as dynamic networks. The efficiency of our algorithm stems from novel 2-hop label-based query processing on large networks. For dynamic networks, we also demonstrate how to incrementally maintain all shortest paths in the 2-hop labels, which allows our algorithm to handle topological changes of dynamic networks such as the insertion or deletion of edges. We carried out experiments on large real-world datasets, and the results confirm the effectiveness of our algorithms for single-pair all-shortest-path computation and the incremental maintenance of 2-hop labels.
(Accepted) Aug. 27, 2020, (Online Published) Sep. 23, 2020, (Published) Feb. 6, 2021
IF=5.91, PCTL=94.551. ISSN: 0020-0255, Elsevier Science Inc., USA
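
The 2-hop labeling that the algorithm builds on answers a distance query from precomputed hub labels alone. Below is a minimal sketch with a hand-made toy graph and labels; it illustrates only the query step, not the paper's label construction or its all-shortest-paths extension.

```python
# Each vertex carries a label: a dict {hub: distance to that hub}.
# A correct 2-hop cover guarantees that for every pair (s, t), some hub
# lies on a shortest s-t path, so the query below is exact.

def query(labels_s, labels_t):
    """dist(s, t) = min over common hubs h of d(s, h) + d(h, t)."""
    common = labels_s.keys() & labels_t.keys()
    if not common:
        return float("inf")
    return min(labels_s[h] + labels_t[h] for h in common)

# Hand-made labels for the path graph a-b-c-d with unit edge weights.
labels = {
    "a": {"a": 0},
    "b": {"a": 1, "b": 0},
    "c": {"a": 2, "b": 1, "c": 0},
    "d": {"a": 3, "b": 2, "c": 1},
}

print(query(labels["a"], labels["d"]))  # distance along the path: 3
```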

Hyun-Jin Kim, Seulki Lee, Dooyoung Jung, Ji-Won Hur, Heon-Jeong Lee, Sungkil Lee, Gerard J. Kim, Chung-Yean Cho, Seungmoon Choi, Seung-Moo Lee, and Chul-Hyun Cho
J. Med. Internet Res., 22(10), e23024:1–16, 2020.
Background: Social anxiety disorder (SAD) is characterized by excessive fear of negative evaluation and humiliation in social interactions and situations. Virtual reality (VR) treatment is a promising intervention option for SAD. Objective: The purpose of this study was to create a participatory and interactive VR intervention for SAD. Treatment progress, including the severity of symptoms and the cognitive and emotional aspects of SAD, was analyzed to evaluate the effectiveness of the intervention. Methods: In total, 32 individuals with SAD and 34 healthy control participants were enrolled in the study through advertisements on online bulletin boards at universities. A VR intervention was designed consisting of three stages (introduction, core, and finishing) and three difficulty levels (easy, medium, and hard) that could be selected by the participants. The core stage was the exposure intervention in which participants engaged in social situations. The effectiveness of treatment was assessed through the Beck Anxiety Inventory (BAI), State-Trait Anxiety Inventory (STAI), Internalized Shame Scale (ISS), Post-Event Rumination Scale (PERS), Social Phobia Scale (SPS), Social Interaction Anxiety Scale (SIAS), Brief Fear of Negative Evaluation Scale (BFNE), and Liebowitz Social Anxiety Scale (LSAS). Results: In the SAD group, scores on the BAI (F=4.616, P=.009), STAI-Trait (F=4.670, P=.004), ISS (F=6.924, P=.001), PERS-negative (F=1.008, P<.001), SPS (F=8.456, P<.001), BFNE (F=6.117, P=.004), KSAD (F=13.259, P<.001), and LSAS (F=4.103, P=.009) significantly improved over the treatment process. Compared with the healthy control group before treatment, the SAD group showed significantly higher scores on all scales (P<.001), and these significant differences persisted even after treatment (P<.001). In the comparison between the VR treatment responder and nonresponder subgroups, there was no significant difference across the course of the VR sessions.
Conclusions: These findings indicated that a participatory and interactive VR intervention had a significant effect on alleviation of the clinical symptoms of SAD, confirming the usefulness of VR for the treatment of SAD. VR treatment is expected to be one of various beneficial therapeutic approaches in the future. Trial Registration: Clinical Research Information Service (CRIS) KCT0003854.
(Submitted) July 31, (Accepted) Sep. 16, 2020, (Published) Oct. 6, 2020
IF=5.034, PCTL=95.016. ISSN: 1438-8871, JMIR Publications Inc., Canada

Young Im Kim, Seo-Yeon Jung, Seulki Min, Eunbi Seol, Sungho Seo, Ji-Won Hur, Dooyoung Jung, Heon-Jeong Lee, Sungkil Lee, Gerard J. Kim, Chung-Yean Cho, Seungmoon Choi, Seung-Moo Lee, and Chul-Hyun Cho
Psychiatry Investig, 16(2), 167–171, 2019.
With proper guidance, virtual reality (VR) can provide psychiatric therapeutic strategies within a simulated environment. The visuo-haptic-based multimodal feedback VR solution has been developed to improve anxiety symptoms through immersive experience and feedback. A proof-of-concept study was performed to investigate this VR solution. Nine subjects recently diagnosed with panic disorder were recruited, and seven of them eventually completed the trial. Two VR sessions were provided to each subject. Depression, anxiety, and VR sickness were evaluated before and after each session. Although there was no significant effect of the VR sessions on psychiatric symptoms, we could observe a trend of improvement in depression, anxiety, and VR sickness. The VR solution was effective in relieving subjective anxiety, especially in panic disorder without comorbidity. VR sickness decreased over time. This study is a new proof-of-concept trial to evaluate the therapeutic effect of VR solutions on anxiety symptoms using visuo-haptic-based multimodal feedback simultaneously.
Feb. 21, 2019
IF=1.333, PCTL=25.752. ISSN: 1738-3684, Korean Neuropsychiatric Assoc, South Korea

Junyong Lee, Sungkil Lee, Sunghyun Cho, and Seungyong Lee
IEEE Conf. Computer Vision and Patt. Recog. (CVPR), 12222–12230, 2019.
In this paper, we propose the first end-to-end convolutional neural network (CNN) architecture, the Defocus Map Estimation Network (DMENet), for spatially varying defocus map estimation. To train the network, we produce a novel depth-of-field (DOF) dataset, SYNDOF, where each image is synthetically blurred with a ground-truth depth map. Due to the synthetic nature of SYNDOF, the feature characteristics of its images can differ from those of real defocused photos. To address this gap, we use domain adaptation that transfers the features of real defocused photos into those of synthetically blurred ones. Our DMENet consists of four subnetworks: blur estimation, domain adaptation, content preservation, and sharpness calibration networks. The subnetworks are connected to each other and jointly trained with their corresponding supervisions in an end-to-end manner. Our method is evaluated on publicly available blur detection and blur estimation datasets, and the results show state-of-the-art performance.
June 16, 2019

Seungtaek Song, Namhyun Kim, Sungkil Lee, Joyce Jiyoung Whang, Jinkyu Lee
IEICE Trans. Fundamentals of Electronics, Comm. and Computer Science, E102-A(4), 668–671, 2019.
Smartphone users often want to customize the positions and functions of physical buttons to accommodate their own usage patterns; however, this is infeasible for electronic mobile devices based on COTS (Commercial Off-The-Shelf) components due to high production costs and hardware design constraints. In this letter, we present the design and implementation of customized virtual buttons that are localized using only the common built-in sensors of electronic mobile devices. We develop sophisticated strategies, first to detect when a user taps one of the virtual buttons, and second to locate the position of the tapped virtual button. The virtual-button scheme is implemented and demonstrated in a COTS-based smartphone. The feasibility study shows that, with up to nine virtual buttons on five different sides of the smartphone, the proposed virtual buttons can operate with greater than 90% accuracy.
Apr. 1, 2019
IF=0.368, PCTL=1.917. ISSN: 0916-8508, IEICE, Japan

Timothy R. Kol, Pablo Bauszat, Sungkil Lee, and Elmar Eisemann
Computer Graphics Forum, 38(1), 235–247, 2019.
Presented at Eurographics 2019, Genova, Italy.
We present a scalable solution to render complex scenes from a large amount of view points. While previous approaches rely either on a scene or a view hierarchy to process multiple elements together, we make full use of both, enabling sublinear performance in terms of views and scene complexity. By concurrently traversing the hierarchies, we efficiently find shared information among views to amortize rendering costs. One example application is many-light global illumination. Our solution accelerates shadow map generation for virtual point lights, whose number can now be raised to over a million while maintaining interactive rates.
Mar. 16, 2019
IF=2.373, PCTL=68.692. ISSN: 0167-7055, Wiley Blackwell Publishing, England

Sungkil Lee, Younguk Kim, and Elmar Eisemann
ACM Trans. Graphics, 37(5), 177:1–13, 2018.
Presented at ACM SIGGRAPH 2019
This article presents an iterative backward-warping technique and its applications. It predictively synthesizes depth buffers for novel views. Our solution is based on a fixed-point iteration that converges quickly in practice. Unlike the previous techniques, our solution is a pure backward warping without using bidirectional sources. To efficiently seed the iterative process, we also propose a tight bounding method for motion vectors. Non-convergent depth holes are inpainted via deep depth buffers. Our solution works well with arbitrarily distributed motion vectors under moderate motions. Many scenarios can benefit from our depth warping. As an application, we propose a highly scalable image-based occlusion-culling technique, achieving a significant speedup compared to the state of the art. We also demonstrate the benefit of our solution in multi-view soft-shadow generation.
(Submitted) Sep. 21, 2017, (Accepted) Jul. 12, 2018, (Published) Oct. 23, 2018
IF=4.384, PCTL=97.596. ISSN: 0730-0301, ACM, USA
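
The fixed-point iteration at the heart of the backward warping can be sketched in a few lines: for a target pixel q under a forward motion field M (a source point p maps to p + M(p)), iterate p ← q − M(p). The smooth analytic motion field below is an illustrative stand-in for a real motion-vector buffer; the paper's seeding and hole-inpainting machinery is omitted.

```python
def motion(p):
    """Toy forward motion vector at source position p = (x, y)."""
    x, y = p
    return (0.1 * y + 2.0, 0.05 * x - 1.0)

def backward_warp(q, iters=20):
    """Solve p + motion(p) = q by fixed-point iteration p <- q - motion(p)."""
    p = q                                  # seed with the target position
    for _ in range(iters):
        mx, my = motion(p)
        p = (q[0] - mx, q[1] - my)
    return p

q = (40.0, 25.0)
p = backward_warp(q)
mx, my = motion(p)
# The converged p maps forward (essentially exactly) back onto q:
print(round(p[0] + mx - q[0], 9), round(p[1] + my - q[1], 9))
```

Convergence is fast here because the toy motion field is a contraction; the paper's bounding method plays the role of a good seed for less benign fields.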

Euijai Ahn, Sungkil Lee, and Gerard Jounghyun Kim
Springer Virtual Reality, 22(3), 245–262, 2018.
Augmented reality (AR) overlays virtual information on a real-world medium and is emerging as an important information visualization technique. As such, the visibility and readability of the augmented information must be as high as possible amidst the dynamically changing real-world surroundings and background. In this work, we present a technique based on image saliency analysis to improve the conspicuity of the foreground augmentation against the background real-world medium by adjusting the local brightness contrast. The proposed technique is implemented on a mobile platform, considering the usage nature of AR. The saliency computation is carried out for the augmented object’s representative color rather than all the pixels, searching and adjusting over only a discrete number of brightness levels to produce the highest contrast saliency, thereby making real-time computation possible. While the resulting imagery may not be optimal due to such simplification, our tests showed that the visibility was still significantly improved, with little difference from the optimal ground truth in terms of correctly perceiving and recognizing the augmented information. In addition, we present another experiment that explores how the proposed algorithm can be applied in actual AR applications. The results suggested that the users clearly preferred the automatic contrast modulation upon large movements in the scenery.
Sep. 1, 2018
IF=1.375, PCTL=36.73. ISSN: 1359-4338, Springer, England

Martin Cadik, Daniel Sykora, and Sungkil Lee
Elsevier Computers & Graphics, 74, 109–118, 2018.
Image enhancement tasks can highly benefit from depth information, but the direct estimation of outdoor depth maps is difficult due to vast object distances. This paper presents a fully automatic framework for the model-based synthesis of outdoor depth maps and its applications to image enhancement. We leverage 3D terrain models and camera pose estimation techniques to render approximate depth maps without resorting to manual alignment. Potential local misalignments, resulting from insufficient model details and rough registrations, are eliminated with our novel free-form warping. We first align synthetic depth edges with photo edges using as-rigid-as-possible image registration and further refine the shape of the edges using tight trimap-based alpha matting. The resulting synthetic depth maps are accurate and calibrated in absolute distance. We demonstrate their benefit in image enhancement techniques including reblurring, depth-of-field simulation, haze removal, and guided texture synthesis.
August 1, 2018
IF=1.2, PCTL=46.635. ISSN: 0097-8493, Pergamon-Elsevier Science Ltd, England

Leonardo Scandolo, Sungkil Lee, and Elmar Eisemann
Computer Graphics Forum (Proc. EGSR'18), 37(4), 167–176, 2018.
Far-field diffraction can be evaluated using the Discrete Fourier Transform (DFT) in image space but it is costly due to its dense sampling. We propose a technique based on a closed-form solution of the continuous Fourier transform for simple vector primitives (quads) and propose a hierarchical and progressive evaluation to achieve real-time performance. Our method is able to simulate diffraction effects in optical systems and can handle varying visibility due to dynamic light sources. Furthermore, it seamlessly extends to near-field diffraction. We show the benefit of our solution in various applications, including realistic real-time glare and bloom rendering.
Jul. 1, 2018
IF=2.046, PCTL=79.327. ISSN: 0167-7055, Wiley Blackwell Publishing, England
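
For intuition on the closed-form evaluation, the far-field (Fraunhofer) transform of a single axis-aligned rectangular aperture is a product of sinc functions; this is the kind of per-quad analytic term that replaces a dense image-space DFT. The parameters below are arbitrary illustrative values.

```python
import numpy as np

def rect_ft(u, v, w, h):
    """Continuous 2D Fourier transform of a w-by-h rectangle centered at
    the origin. Uses numpy's normalized sinc: sinc(x) = sin(pi x)/(pi x)."""
    return w * h * np.sinc(w * u) * np.sinc(h * v)

# The DC term equals the aperture area; the response decays and oscillates
# away from it, producing the familiar cross-shaped diffraction pattern.
print(rect_ft(0.0, 0.0, 2.0, 1.0))   # area = 2.0
print(abs(rect_ft(0.5, 0.0, 2.0, 1.0)) < 1e-12)  # first zero of sinc
```

Summing such terms over many quads (with visibility and phase) is cheap compared to sampling the aperture densely, which is what makes hierarchical, progressive evaluation practical.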

Sun Geol Baek, Dong Hyun Kang, Sungkil Lee, Young Ik Eom
Journal of Systems and Software, 140, 17–31, 2018.
Abnormal messages propagated from faulty operations in a vehicular system may severely harm the system, but they cannot be easily detected when their information is not known in advance. To support the efficient detection of faulty message patterns propagated in the in-vehicle network, this paper presents a novel graph pattern matching framework built upon message log-driven graph modeling. Our framework models the unknown condition as a query graph and the reference database of normal operations as data graphs. The analysis of faulty message propagation requires considering the sequence of events in the distance measure, and thus, conventional graph distance measures cannot be directly used for our purpose. We hence propose a novel distance metric based on the maximum common subgraph (MCS) between two graphs and the sequence numbers of messages, which works robustly even for abnormal faulty patterns and can avoid false negatives in large databases. Since the problem of MCS computation is NP-hard, we also propose two efficient filtering techniques, one based on the lower bound of the MCS distance for a polynomial-time approximation and the other based on edge pruning. Experiments performed on real and synthetic datasets show that our framework significantly outperforms the previously existing methods in terms of both performance and accuracy of query responses.
Feb. 24, 2018
IF=2.278, PCTL=81.154. ISSN: 0164-1212, Elsevier Science Inc, USA
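
The lower-bound filtering idea can be sketched as follows: exact MCS distance is NP-hard, but a cheap edge-label overlap upper-bounds the MCS size and therefore lower-bounds the distance, letting candidate graphs be pruned before any exact matching. Representing graphs as multisets of edge labels is a deliberate simplification for illustration, not the paper's model.

```python
from collections import Counter

def mcs_distance_lower_bound(g1, g2):
    """Lower bound on d = 1 - |MCS| / max(|g1|, |g2|): the MCS can share
    at most the overlapping edge-label multiset of the two graphs."""
    overlap = sum((Counter(g1) & Counter(g2)).values())
    return 1.0 - overlap / max(len(g1), len(g2))

def filter_candidates(query, database, threshold):
    """Keep only graphs whose true distance could still be <= threshold."""
    return [gid for gid, g in database.items()
            if mcs_distance_lower_bound(query, g) <= threshold]

db = {"normal_1": ["A", "B", "C", "D"],
      "normal_2": ["A", "B", "X", "Y"],
      "normal_3": ["P", "Q", "R", "S"]}
print(filter_candidates(["A", "B", "C", "E"], db, 0.5))
```

Only the survivors of this filter would go on to the expensive exact MCS computation.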

This paper presents a scalable parser framework using graphics processing units (GPUs) for massive text-based files. Specifically, our solution is designed to efficiently parse Wavefront OBJ models, whose texts specify 3D geometries and their topology. Our work bases its scalability and efficiency on chunk-based processing. The entire parsing problem is subdivided into chunk-level subproblems that can be processed independently and merged seamlessly. The within-chunk processing is made highly parallel, leveraged by GPUs. Our approach thereby overcomes the bottlenecks of the existing OBJ parsers. Experiments performed to assess the performance of our system showed that our solution significantly outperforms the existing CPU-based and GPU-based solutions.
IF=0.878, PCTL=20.433. ISSN: 1000-9000, Springer, China
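
The chunk-based decomposition can be illustrated on the CPU: split the OBJ text into line-aligned chunks, parse each independently, and merge the results. This sketch replaces the paper's GPU-parallel within-chunk processing with plain sequential parsing, and handles only vertex lines.

```python
def split_chunks(text, n):
    """Split text into roughly n chunks, snapping boundaries to newlines
    so each chunk can be parsed independently."""
    size = max(1, len(text) // n)
    chunks, start = [], 0
    while start < len(text):
        end = min(len(text), start + size)
        end = text.find("\n", end)          # snap forward to a line break
        end = len(text) if end == -1 else end + 1
        chunks.append(text[start:end])
        start = end
    return chunks

def parse_chunk(chunk):
    """Extract vertex positions ('v x y z' lines) from one chunk."""
    verts = []
    for line in chunk.splitlines():
        if line.startswith("v "):
            verts.append(tuple(float(x) for x in line.split()[1:4]))
    return verts

obj = "v 0 0 0\nv 1 0 0\nv 0 1 0\nf 1 2 3\n"
verts = [v for c in split_chunks(obj, 2) for v in parse_chunk(c)]
print(len(verts))  # 3 vertices recovered across chunks
```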

Jun Suk Kim, Sungkil Lee, Min Young Chung
Pervasive and Mobile Computing, 44, 45–57, 2018.
Cellular internet-of-things (CIoT) systems have recently been developed by the third-generation partnership project (3GPP) to support internet-of-things (IoT) services over conventional mobile-communication infrastructures. CIoT systems allow a large number of IoT devices to be connected through the random-access procedure, but the concurrent accesses of massive numbers of devices make this procedure heavily competitive. In this article, we present an effective time-division random-access scheme built upon coverage levels (CLs), where each CIoT device is assigned a CL and categorized based on its radio-channel quality. In our scheme, the random-access loads of device groups having different CLs are distributed into different time periods, which greatly relaxes instantaneous contention and improves random-access performance. To assess the performance of our scheme, we also introduce a mathematical model that expresses and analyzes the states and behaviors of CIoT devices using a Markov chain. Mathematical analysis and simulation results show that our scheme significantly outperforms the conventional scheme (without time-division control) in terms of collision probability, access success rate, and access-blocking probability.
Feb. 01, 2018
IF=2.974, PCTL=73.79. ISSN: 1574-1192, Elsevier, Netherlands

Soonhyeon Kwon, Younguk Kim, Kihyuk Kim, and Sungkil Lee
Computer Animation and Virtual Worlds, 29(1), e1784:1–14, 2018.
This paper presents a novel heterogeneous volume deformation technique and an intuitive volume animation authoring framework. Our volume deformation extends the previous technique based on moving least squares with a density-aware weighting metric for data-driven importance control and efficient upsampling-based volume synthesis. For user interaction, we present an intuitive visual metaphor and interaction schemes to support effective spatiotemporal editing of volume deformation animation. Our framework is implemented fully on graphics processors and thus suitable for quick-and-easy prototyping of volume deformation with improved controllability.
Feb. 6, 2018
IF=0.697, PCTL=12.981. ISSN: 1546-4261, Wiley, England
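
A much-simplified sketch of the density-aware weighting idea: translation-only moving least squares, where each control point's influence is scaled by a local density term standing in for the paper's data-driven importance metric. All control points and values below are illustrative.

```python
def deform(p, controls, alpha=2.0, eps=1e-8):
    """Deform 2D point p by a density-weighted MLS displacement average.
    controls: list of (position, displacement, density) tuples."""
    wsum, dx, dy = 0.0, 0.0, 0.0
    for (cx, cy), (ux, uy), density in controls:
        d2 = (p[0] - cx) ** 2 + (p[1] - cy) ** 2 + eps
        w = density / d2 ** (alpha / 2)       # density-aware inverse-distance weight
        wsum += w
        dx += w * ux
        dy += w * uy
    return (p[0] + dx / wsum, p[1] + dy / wsum)

controls = [((0.0, 0.0), (1.0, 0.0), 1.0),    # dense region: strong pull
            ((4.0, 0.0), (0.0, 0.0), 0.1)]    # sparse region: weak anchor
print(deform((1.0, 0.0), controls))
```

Points near the dense control point inherit almost its full displacement, while the low-density anchor barely resists; the full method adds affine MLS terms and upsampling-based volume synthesis on the GPU.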

Jun Suk Kim, Sungkil Lee, Min Young Chung
IEEE Trans. Vehicular Technology, 66(7), 6280–6290, 2017.
In order to facilitate low-cost network connection of many devices, machine-type communication (MTC) has evolved to low-cost MTC (LC-MTC) in the third-generation partnership project (3GPP) standard. LC-MTC should be able to effectively handle intensive accesses through multiple narrow-band (NB) random-access channels (RACHs) assigned within the bandwidth of a long-term evolution (LTE) system. As the number of MTC devices and their congestion rapidly increase, the random-access scheme for LC-MTC RACH needs to be improved. This paper presents a novel random-access scheme that introduces virtual preambles of LC-MTC devices and associates them with RACH indices to effectively discern LC-MTC devices. In comparison to the sole use of preambles, our scheme allows an LC-MTC device to better choose a unique virtual preamble. Thereby, the probability of successful accesses of LC-MTC devices increases in contention-based random-access environments. We experimentally assessed our scheme and the results show that our scheme performs better than the existing preamble-based scheme in terms of collision probability, access delay, and access blocking probability.
July 1, 2017
IF=4.066, PCTL=89.003. ISSN: 0018-9545, IEEE, USA
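
The core effect can be checked with a small Monte-Carlo sketch: pairing each preamble with a RACH index multiplies the effective pool of "virtual preambles", lowering the collision probability. The device counts and pool sizes here are illustrative, not the paper's evaluation setup.

```python
import random

def collision_prob(n_devices, n_choices, trials=2000, seed=1):
    """Fraction of devices that pick a resource also picked by another."""
    rng = random.Random(seed)
    collided = 0
    for _ in range(trials):
        picks = [rng.randrange(n_choices) for _ in range(n_devices)]
        counts = {}
        for p in picks:
            counts[p] = counts.get(p, 0) + 1
        # every device in a multiply-chosen slot counts as a collision
        collided += sum(c for c in counts.values() if c > 1)
    return collided / (trials * n_devices)

preambles, rach_indices = 54, 4
p_plain = collision_prob(30, preambles)                   # preambles alone
p_virtual = collision_prob(30, preambles * rach_indices)  # virtual preambles
print(p_plain > p_virtual)  # larger effective pool -> fewer collisions
```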

Yunji Kang, Woohyun Joo, Sungkil Lee, Dongkun Shin
Journal of Systems Architecture, 76, 17–27, 2017.
Many visual tasks in modern personal devices such as smartphones rely heavily on graphics processing units (GPUs) for fluent user experiences. Because most GPUs for embedded systems are nonpreemptive by nature, it is important to schedule GPU resources efficiently across multiple GPU tasks. We present a novel spatial resource sharing (SRS) technique for GPU tasks, called budget-reservation spatial resource sharing (BR-SRS) scheduling, which limits the number of GPU processing cores for a job based on the priority of the job. Such priority-driven resource assignment can prevent a high-priority foreground GPU task from being delayed by background GPU tasks. The BR-SRS scheduler is invoked only twice, at the arrival and completion of jobs, and thus the scheduling overhead is minimized as well. We evaluated the performance of our scheduling scheme on an Android-based smartphone, and found that the proposed technique significantly improved the performance of high-priority tasks in comparison to the previous temporal budget-based multi-task scheduling.
May 5, 2017
IF=1.579, PCTL=52.585. ISSN: 1383-7621, Elsevier, Netherlands

Kihong Lee, DongWoo Lee, Sungkil Lee, and Young Ik Eom
Journal of Supercomputing, 73(4), 1307–1321, 2017.
A virtualized system generally suffers from low I/O performance, mainly caused by its inherent abstraction overhead and frequent CPU transitions between the guest and hypervisor modes. Recent research on polling-based I/O virtualization has partly solved the problem, but excessive polling trades intensive CPU usage for higher performance. This article presents a power-efficient and high-performance block I/O framework for a virtual machine, which allows its use even with a limited number of CPU cores in mobile or embedded systems. Our framework monitors system status and dynamically switches the I/O processing mode between the exit and polling modes, depending on the amount of current I/O requests and the CPU utilization. It also dynamically controls the polling interval to reduce redundant polling. The highly dynamic nature of our framework leads to improvements in I/O performance with lower CPU usage as well. Our experiments showed that our framework outperformed the existing exit-based mechanisms with 10.8% higher I/O throughput, while maintaining similar CPU usage (only a 3.1% increase). In comparison to systems solely based on the polling mechanism, ours reduced CPU usage to roughly 10.0% with no or negligible performance loss.
Mar. 23, 2017
IF=1.326, PCTL=40.704. ISSN: 0920-8542, Springer, USA
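
The switching policy can be caricatured in a few lines: poll under heavy I/O when CPU headroom exists, otherwise fall back to the exit-based path. The thresholds are invented for illustration; the real framework's monitoring and interval control are far richer.

```python
def choose_mode(io_requests_per_sec, cpu_util, io_hi=10000, cpu_hi=0.8):
    """Pick the I/O processing mode from current load measurements.
    Polling pays off only when I/O is heavy and CPU is not saturated."""
    if io_requests_per_sec >= io_hi and cpu_util < cpu_hi:
        return "polling"
    return "exit"

print(choose_mode(50000, 0.4))  # heavy I/O, idle CPU
print(choose_mode(500, 0.9))    # light I/O, busy CPU
```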

Hyuntae Joo, Soonhyeon Kwon, Sangmin Lee, Elmar Eisemann, and Sungkil Lee
Computer Graphics Forum (Proc. EGSR'16), 35(4), 99–105, 2016.
We present an efficient ray-tracing technique to render bokeh effects produced by parametric aspheric lenses. Contrary to conventional spherical lenses, aspheric lenses generally do not permit a simple closed-form solution of ray-surface intersections. We propose a numerical root-finding approach, which uses tight proxy surfaces to ensure a good initialization and convergence behavior. Additionally, we simulate mechanical imperfections resulting from the lens fabrication via a texture-based approach. A fractional Fourier transform and spectral dispersion add further realism to the synthesized bokeh effect. Our approach is well-suited for execution on graphics processing units (GPUs), and we demonstrate complex defocus-blur and lens-flare effects.
Jun. 22, 2016
IF=1.542, PCTL=84.434. ISSN: 0167-7055, Wiley Blackwell Publishing, England
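
The root-finding idea can be sketched as follows: an even aspheric sag profile z(r) = r² / (R (1 + √(1 − (1+k) r²/R²))) + a₄ r⁴ has no simple closed-form ray intersection, so a proxy hit seeds Newton iterations. This sketch uses a planar proxy and a finite-difference derivative; the surface coefficients and the ray are arbitrary illustrative values.

```python
import math

R, k, a4 = 20.0, -1.0, 1e-5          # radius, conic constant, 4th-order term

def sag(r):
    """Even aspheric sag profile z(r)."""
    r2 = r * r
    return r2 / (R * (1 + math.sqrt(1 - (1 + k) * r2 / (R * R)))) + a4 * r2 * r2

def intersect(o, d, iters=12, h=1e-6):
    """Ray p(t) = o + t d with components (x, z); solve f(t) = z(t) - sag(x(t)) = 0
    by Newton's method with a central-difference derivative."""
    f = lambda t: (o[1] + t * d[1]) - sag(o[0] + t * d[0])
    t = -o[1] / d[1]                  # proxy seed: hit with the plane z = 0
    for _ in range(iters):
        df = (f(t + h) - f(t - h)) / (2 * h)
        t -= f(t) / df
    return t

o, d = (2.0, -5.0), (0.1, 1.0)        # oblique ray approaching the surface
t = intersect(o, d)
x, z = o[0] + t * d[0], o[1] + t * d[1]
print(abs(z - sag(x)) < 1e-9)         # the refined hit lies on the surface
```

The paper's tight proxy surfaces serve the same purpose as the crude planar seed here: they put the initial guess close enough that the iteration converges reliably.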

Yuna Jeong, Sangmin Lee, Soonhyeon Kwon, and Sungkil Lee
The Visual Computer (Proc. CGI'16), 32(6), 1025–1034, 2016.
This article presents a novel parametric model to include expressive chromatic aberrations in defocus blur rendering and its effective implementation using the accumulation buffering. Our model modifies the thin-lens model to adopt the axial and lateral chromatic aberrations, which allows us to easily extend them with nonlinear and artistic appearances beyond physical limits. For the dispersion to be continuous, we employ a novel unified 3D sampling scheme, involving both the lens and spectrum. We further propose a spectral equalizer to emphasize particular dispersion ranges. As a consequence, our approach enables more intuitive and explicit control of chromatic aberrations, unlike the previous physically-based rendering methods.
June, 2016
IF=1.06, PCTL=53.302. ISSN: 0178-2789, Springer, Germany
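
The axial-aberration part of the model can be sketched with a thin-lens circle of confusion whose focal length depends on wavelength, so each spectral sample focuses at a different depth. The linear dispersion model and all constants below are illustrative assumptions, not the paper's parametric model.

```python
def coc_diameter(d, wavelength, f0=0.05, aperture=0.025, focus=2.0,
                 dispersion=0.02):
    """Thin-lens circle of confusion (m) for an object at distance d (m),
    with a toy linear wavelength dependence of the focal length."""
    f = f0 * (1.0 + dispersion * (wavelength - 550.0) / 550.0)
    return aperture * f * abs(d - focus) / (d * (focus - f))

for wl in (450.0, 550.0, 650.0):       # blue, green, red spectral samples
    print(wl, round(coc_diameter(4.0, wl), 6))
```

Because each wavelength yields a slightly different blur radius, spectral sampling over the lens produces the continuous colored fringes the paper controls and exaggerates.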

Yuna Jeong, Hyuntae Joo, Gyeonghwan Hong, Dongkun Shin, and Sungkil Lee
IEEE Trans. Consumer Electronics, 61(3), 295–301, 2015.
The Internet of Things (IoT) has recently emerged as a common platform and service for consumer electronics. This paper presents an interactive framework for visualizing and authoring IoT in indoor environments such as homes or small offices. The building blocks of the framework are virtual sensors and actuators that abstract physical things and their virtual behaviors on top of their physical networks. Their behaviors are abstracted and programmed through visual authoring tools on the web, which allows a casual consumer to easily monitor and define them even without knowing the underlying physical connections. A user study performed to assess the usability of the visual authoring showed that it is easy to use, understandable, and also preferred to typical text-based script programming.
Aug., 2015
IF=1.045, PCTL=48.641. ISSN: 0098-3063, IEEE, USA

Myongchan Kim, Sungkil Lee, and Seungmoon Choi
IEEE Trans. Haptics, 7(3), 394–404, 2014.
Tactile feedback coordinated with visual stimuli has proven its worth in mediating immersive multimodal experiences, yet its authoring has relied on content artists. This article presents a fully automated framework of generating tactile cues from streaming images to provide synchronized visuotactile stimuli in real time. The spatiotemporal features of video images are analyzed on the basis of visual saliency and then mapped into the tactile cues that are rendered on tactors installed on a chair. We also conducted two user experiments for performance evaluation. The first experiment investigated the effects of visuotactile rendering against visual-only rendering, demonstrating that the visuotactile rendering improved the movie watching experience to be more interesting, immersive, and understandable. The second experiment was performed to compare the effectiveness of authoring methods and found that the automated authoring approach, used with care, can produce plausible tactile effects similar in quality to manual authoring.
Sep. 17, 2014
IF=2.03, PCTL=81.25. ISSN: 1939-1412, IEEE Computer Society, USA

Sungkil Lee, Mike Sips, and Hans-Peter Seidel
IEEE Trans. Vis. and Computer Graphics, 19(10), 1746–1757, 2013.
Invited to and presented at IEEE InfoVIS 2013, Atlanta, GA
Visualization techniques often use color to present categorical differences to a user. When selecting a color palette, the perceptual qualities of color need careful consideration. Large coherent groups visually suppress smaller groups, and are often visually dominant in images. This article introduces the concept of class visibility used to quantitatively measure the utility of a color palette to present coherent categorical structure to the user. We present a color optimization algorithm based on our class visibility metric to make categorical differences clearly visible to the user. We performed two user experiments on user preference and visual search to validate our visibility measure over a range of color palettes. The results indicate that visibility is a robust measure, and our color optimization can increase the effectiveness of categorical data visualizations.
Oct., 2013
IF=1.898, PCTL=88.095. ISSN: 1077-2626, IEEE Computer Society, USA

Yuna Jeong, Kangtae Kim, and Sungkil Lee
Computer Graphics Forum, 32(6), 126–134, 2013.
Invited to and presented at Pacific Graphics 2014, Seoul, Korea.
This paper presents a GPU-based rendering algorithm for real-time defocus blur effects, which significantly improves the accumulation buffering. The algorithm combines three distinctive techniques: (1) adaptive discrete geometric level of detail (LOD), made popping-free by blending visibility samples across the two adjacent geometric levels; (2) adaptive visibility/shading sampling via sample reuse; (3) visibility supersampling via height-field ray casting. All three techniques are seamlessly integrated to lower the rendering cost of smooth defocus blur with high visibility sampling rates, while maintaining most of the quality of brute-force accumulation buffering.
Sep. 13, 2013
IF=1.638, PCTL=82.381. ISSN: 0167-7055, Wiley Blackwell Publishing, England

We present a practical real-time approach for rendering lens-flare effects. While previous work employed costly ray tracing or complex polynomial expressions, we present a coarser, but also significantly faster solution. Our method is based on a first-order approximation of the ray transfer in an optical system, which allows us to derive a matrix that maps lens-flare-producing light rays directly to the sensor. The resulting approach is easy to implement and produces physically-plausible images at high framerates on standard off-the-shelf graphics hardware.
Jul. 18, 2013
IF=1.638, PCTL=82.381. ISSN: 0167-7055, Wiley Blackwell Publishing, England
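The first-order approximation of ray transfer mentioned above corresponds to classical paraxial (ABCD) matrices: composing one matrix per surface and gap yields a single matrix that maps a ray at the front element directly to the sensor. The two-surface toy lens below and all of its numbers are illustrative assumptions, not the paper's data:

```python
def translate(d):
    # Paraxial propagation over distance d in a homogeneous medium.
    return [[1.0, d], [0.0, 1.0]]

def refract(R, n1, n2):
    # Paraxial refraction at a spherical interface of radius R,
    # going from refractive index n1 into n2.
    return [[1.0, 0.0], [(n1 - n2) / (R * n2), n1 / n2]]

def mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

# Toy biconvex lens (two surfaces 5 mm apart) followed by a 50 mm air
# gap to the sensor; all numbers are illustrative.
M = translate(50.0)
for T in [refract(-60.0, 1.5, 1.0), translate(5.0), refract(60.0, 1.0, 1.5)]:
    M = mul(M, T)   # rightmost factor (entrance surface) applies first

# Map an incoming ray (height r, angle u) directly to the sensor plane.
r, u = 10.0, 0.0
r_out = M[0][0] * r + M[0][1] * u
```

A useful sanity check is the determinant of the composite matrix, which equals the ratio of entry and exit refractive indices (here 1, air to air).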

Myongchan Kim, Sungkil Lee, and Seungmoon Choi
Proc. Eurohaptics, 258–269, 2012.
This paper presents a new DOF rendering algorithm, based on the distributed rasterization [Haeberli and Akeley 1990] and LOD management. Our solution allows us to maintain the benefit of the object-based approach without spatiotemporal quality loss, while achieving real-time performance. A key idea is that geometric degradation of models is not perceived when they are blurred. Hence, lower details can be used for blurred models, greatly improving performance. Another challenge here is avoiding temporal popping artifacts resulting from transitions between adjacent discrete levels. To avoid this problem, we propose a novel blending method for LOD.
June 13, 2012

Matthias Hullin, Elmar Eisemann, Hans-Peter Seidel, and Sungkil Lee.
ACM Trans. Graphics (Proc. SIGGRAPH'11), 30(4), 108:1–9, 2011.
Lens flare is caused by light passing through a photographic lens system in an unintended way. Often considered a degrading artifact, it has become a crucial component for realistic imagery and an artistic means that can even lead to an increased perceived brightness. So far, only costly offline processes allowed for convincing simulations of the complex light interactions. In this paper, we present a novel method to interactively compute physically-plausible flare renderings for photographic lenses. The underlying model covers many components that are important for realism, such as imperfections, chromatic and geometric lens aberrations, and antireflective lens coatings. Various acceleration strategies allow for a performance/quality tradeoff, making our technique applicable both in real-time applications and in high-quality production rendering. We further outline artistic extensions to our system.
Jul. 25, 2011
IF=3.632, PCTL=97.475. ISSN: 0730-0301, ACM, USA

Sunghoon Yim, Sungkil Lee, and Seungmoon Choi
Interacting with Computers, 23(3), 268–278, 2011.
This article evaluates the usability of motion sensing-based interaction on a mobile platform using image browsing as a representative task. Three types of interfaces, a physical button interface, a motion-sensing interface using a high-precision commercial 3D motion tracker, and a motion-sensing interface using an in-house low-cost 3D motion tracker, are compared in terms of task performance and subjective preference. Participants were provided with prolonged training over 20 days, in order to compensate for the participants’ unfamiliarity with the motion-sensing interfaces. Experimental results showed that the participants’ task performance and subjective preference for the two motion-sensing interfaces were initially low, but they rapidly improved with training and soon approached the level of the button interface. Furthermore, a recall test, which was conducted 4 weeks later, demonstrated that the usability gains were well retained in spite of the long time gap between uses. Overall, these findings highlight the potential of motion-based interaction as an intuitive interface for mobile devices.
IF=1.192, PCTL=48.214. ISSN: 0953-5438, Oxford Univ Press, Netherlands

Sungkil Lee, Elmar Eisemann, and Hans-Peter Seidel.
ACM Trans. Graphics (Proc. SIGGRAPH'10), 29(4), 65:1–7, 2010.
We present a novel rendering system for defocus-blur and lens effects. It supports physically-based rendering and outperforms previous approaches by involving a novel GPU-based tracing method. Our solution achieves more precision than competing real-time solutions, and our results are mostly indistinguishable from offline rendering. Our method is also more general and can integrate advanced simulations, such as simple geometric lens models enabling various lens aberration effects. The latter are crucial for realism, but are often employed in artistic contexts too. We show that available artistic lenses can be simulated by our method. In this spirit, our work introduces an intuitive control over depth-of-field effects. The physical basis is crucial as a starting point to enable new artistic renderings based on a generalized focal surface to emphasize particular elements in the scene while retaining a realistic look. Our real-time solution provides realistic as well as plausible expressive results.
IF=3.619, PCTL=98.387. ISSN: 0730-0301, ACM, USA

Sungkil Lee, Elmar Eisemann, and Hans-Peter Seidel.
ACM Trans. Graphics (Proc. SIGGRAPH ASIA'09), 28(5), 134:1–6, 2009.
We present a GPU-based real-time rendering method that simulates a high-quality depth-of-field blur, similar in quality to multiview accumulation methods. Most real-time approaches have difficulty obtaining good approximations of visibility and view-dependent shading due to the use of a single view image. Our method also avoids the multiple rendering of a scene, but can approximate different views by relying on a layered image-based scene representation. We present several performance and quality improvements, such as early culling, approximate cone tracing, and jittered sampling. Our method achieves artifact-free results for complex scenes and reasonable depth-of-field blur in real time.
IF=3.383, PCTL=97.093. ISSN: 0730-0301, ACM, USA

Sungkil Lee, Gerard J. Kim, and Seungmoon Choi
IEEE Trans. Vis. and Computer Graphics, 15(3), 453–464, 2009.
This article presents a real-time GPU-based postfiltering method for rendering acceptable depth-of-field effects suited for virtual reality. Blurring is achieved by nonlinearly interpolating mipmap images generated from a pinhole image. Major artifacts common in the postfiltering techniques such as a bilinear magnification artifact, intensity leakage, and blurring discontinuity are practically eliminated via magnification with a circular filter, anisotropic mipmapping, and smoothing of blurring degrees. The whole framework is accelerated using GPU programs for constant and scalable real-time performance required for virtual reality. We also compare our method to recent GPU-based methods in terms of image quality and rendering performance.
IF=2.445, PCTL=91.279. ISSN: 1077-2626, IEEE Computer Society, USA
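The blur degree that drives this kind of mipmap postfiltering comes from the thin-lens circle of confusion (CoC), which can be sketched as follows. The mapping of the CoC diameter to a fractional mipmap level and all camera parameters below are illustrative assumptions, not the paper's exact nonlinear interpolation:

```python
import math

def coc_pixels(z, z_focus, f=0.05, aperture=0.025, px_per_m=20000.0):
    # Thin-lens circle-of-confusion diameter on the sensor, in pixels,
    # for an object at depth z when focused at z_focus (meters).
    c = aperture * f * abs(z - z_focus) / (z * (z_focus - f))
    return c * px_per_m

def mip_level(z, z_focus, max_level=8.0):
    # Map the CoC diameter to a fractional mipmap level: level 0 keeps
    # the sharp pinhole image, higher levels are progressively blurrier.
    c = coc_pixels(z, z_focus)
    return min(max_level, math.log2(max(1.0, c)))
```

In-focus pixels map to level 0 (the original pinhole image), and the level grows logarithmically with the blur diameter, matching how mipmap resolution halves per level.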


Sungkil Lee, Gerard J. Kim, and Seungmoon Choi
IEEE Trans. Vis. and Computer Graphics, 15(1), 6–19, 2009.
This paper presents a real-time framework for computationally tracking objects visually attended by the user while navigating in interactive virtual environments (VEs). In addition to the conventional bottom-up (stimulus-driven) saliency map, the proposed framework uses top-down (goal-directed) contexts inferred from the user's spatial and temporal behaviors and identifies the most plausibly attended objects among candidates in the object saliency map. The computational framework was implemented using the GPU, exhibiting high computational performance adequate for interactive VEs. A user experiment was also conducted to evaluate the prediction accuracy of the tracking framework by comparing objects regarded as visually attended by the framework to actual human gaze collected with an eye tracker. The results indicated that the accuracy was at a level well supported by the theory of human cognition for visually identifying single and multiple attentive targets, especially owing to the addition of top-down contextual information. Finally, we demonstrate how the visual attention tracking framework can be applied to managing the level of detail in VEs, without any hardware for head or eye tracking.
IF=2.445, PCTL=91.279. ISSN: 1077-2626, IEEE Computer Society, USA


Sungkil Lee, Gerard J. Kim, and Seungmoon Choi
Computer Graphics Forum, 27(7), 1955–1962, 2008.
Presented at Pacific Graphics 2008
We present a real-time method for rendering a depth-of-field effect based on the per-pixel layered splatting where source pixels are scattered on one of the three layers of a destination pixel. In addition, the missing information behind foreground objects is filled with an additional image of the areas occluded by nearer objects. The method creates high-quality depth-of-field results even in the presence of partial occlusion, without major artifacts often present in the previous real-time methods. The method can also be applied to simulating defocused highlights. The entire framework is accelerated by GPU, enabling real-time post-processing for both off-line and interactive applications.
IF=1.107, PCTL=58.929. ISSN: 0167-7055, Wiley Blackwell Publishing, England

This article reports two human experiments to investigate the effects of visual cues and sustained attention on spatial presence over a period of prolonged exposure in virtual environments. Inspired by the two functional subsystems subserving spatial and object vision in the human brain, visual cues and sustained attention were each classified into spatial and object cues, and spatial and non-spatial attention, respectively. In the first experiment, the effects of visual cues on spatial presence were examined when subjects were exposed to virtual environments configured with combinations of spatial and object cues. It was found that both types of visual cues enhanced spatial presence with saturation over a period of prolonged exposure, but the contribution of spatial cues became more relevant with longer exposure time. In the second experiment, subjects were asked to carry out two tasks involving sustained spatial attention and sustained non-spatial attention. We observed that spatially directed attention improved spatial presence more than non-spatially directed attention did. Furthermore, spatial attention had a positive interaction with detailed object cues.
IF=0.969, PCTL=59.729. ISSN: 0953-5438, Oxford Univ Press, Netherlands

Sungkil Lee, Gerard J. Kim, and Seungmoon Choi
Proc. ACM VR Software and Tech., 29–38, 2007.
Invited to the TVCG's special section on VRST'07 best papers
This paper presents a real-time framework for computationally tracking objects visually attended by the user while navigating in interactive virtual environments. In addition to the conventional bottom-up (stimulus-driven) features, the framework also uses top-down (goal-directed) contexts to predict the human gaze. The framework first builds feature maps using preattentive features such as luminance, hue, depth, size, and motion. The feature maps are then integrated into a single saliency map using the center-surround difference operation. This pixel-level bottom-up saliency map is converted to an object-level saliency map using the item buffer. Finally, the top-down contexts are inferred from the user's spatial and temporal behaviors during interactive navigation and used to select the most plausibly attended object among candidates produced in the object saliency map. The computational framework was implemented using the GPU and exhibited extremely fast computing performance (5.68 msec for a 256x256 saliency map), substantiating its adequacy for interactive virtual environments. A user experiment was also conducted to evaluate the prediction accuracy of the visual attention tracking framework with respect to actual human gaze data. The attained accuracy level was well supported by the theory of human cognition for visually identifying single and multiple attentive targets, especially due to the addition of top-down contextual information. The framework can be effectively used for perceptually based rendering without employing an expensive eye tracker, such as providing depth-of-field effects and managing the level of detail in virtual environments.

Jane Hwang, J. Jung, S. Yim, J. Cheon, Sungkil Lee, S. Choi, and Gerard. J. Kim.
International Journal of Virtual Reality, 5(2), 59–66, 2006.
While hand-held computing devices are capable of rendering advanced 3D graphics and processing multimedia data, they are not designed to provide and induce a sufficient sense of immersion and presence for virtual reality. In this paper, we propose minimal requirements for realizing VR on a hand-held device. Furthermore, based on the proposed requirements, we have designed and implemented a low-cost hand-held VR platform by adding multimodal sensors and display components to a hand-held PC. The platform enables a motion-based interface, an essential part of realizing VR on a small hand-held device, and provides outputs in three modalities (visual, aural, and tactile/haptic) for a reasonable sensory experience. We showcase our platform and demonstrate the possibilities of hand-held VR through three VR applications: a typical virtual walkthrough, a 3D multimedia contents browser, and a motion-based racing game.
ISSN: 1081-1451

Sungkil Lee, Gerard J. Kim, and Janghan Lee.
Proc. ACM VR Software and Tech., 73–80, 2004.
Presence is one of the goals of many virtual reality systems. Historically, in the context of virtual reality, the concept of presence has been associated much with spatial perception (a bottom-up process), as its informal definition of "feeling of being there" suggests. However, recent studies in presence have challenged this view and attempted to widen the concept to include psychological immersion, thus linking more high-level elements (processed in a top-down fashion) to presence, such as story and plots, flow, attention and focus, identification with the characters, emotion, etc. In this paper, we experimentally studied the relationship between two content elements, each representing one of the two axes of the presence dichotomy: perceptual cues for spatial presence and sustained attention for (psychological) immersion. Our belief was that spatial perception or presence and a top-down processed concept such as voluntary attention have only a very weak relationship; thus, our experimental hypothesis was that sustained attention would positively affect spatial presence in a virtual environment with impoverished perceptual cues, but have no effect in an environment rich in them. In order to confirm the existence of sustained attention in the experiment, fMRI scans of the subjects were taken and analyzed as well. The experimental results showed that attention had no effect on spatial presence, even in the environment with impoverished spatial cues.

Sungkil Lee, Gerard J. Kim, Albert Rizzo, and Hyungjin Park.
Proc. 7th Annual International Workshop on Presence, 20–27, 2004.
Spatial presence, among the many aspects of presence, is the sense of physical and concrete space, often dubbed the sense of "being there." This paper theorizes on how spatial presence is formed by various types of artificial cues in a virtual environment, whether form or content. We believe that spatial presence is a product of an unconscious effort to correctly register oneself into the virtual environment in a consistent manner. We hypothesize that this process is perceptual and bottom-up in nature, and rooted in the reflexive and adaptive behavior to react to and resolve the mismatch in the spatial cues between the physical space where the user is and the virtual space where the user looks at, hears from, and interacts with. Hinted by the fact that our brain has two major paths for processing sensory input, the "where" path for determining object locations and the "what" path for identifying objects, we categorize the sensory stimulation cues in the virtual environment accordingly and investigate their relationships as to how they affect the user in adaptively registering oneself into the virtual environment, thus creating spatial presence. Based on the results of a series of our experiments and other bodies of research, we postulate that while low-level and perceptual spatial cues are sufficient for creating spatial presence, they can be affected and modulated by the spatial (whether form or content) factors. These results provide important insights into constructing a model of spatial presence, its measurement, and guidelines for configuring location-based virtual reality applications.




Conference Posters

Seung Youp Baek and Sungkil Lee
Pacific Graphics Posters, 47–48, 2020.
We present a semi-automated framework that translates day-time road scene images to the night-time domain. Unlike recent studies based on Generative Adversarial Networks (GANs), we avoid learning-based translation and its random failures. Our framework uses semantic annotation to extract scene elements, perceives the scene structure/depth, and applies per-element translation. Experimental results demonstrate that our framework can synthesize higher-resolution results without translation artifacts.
Dec. 10, 2020

Gi Beom Lee and Sungkil Lee
High Performance Graphics Posters, 2020.
This poster presents an iterative hierarchical Raster Occlusion Culling (ROC) technique that can scale up to massive scenes with higher geometry complexities. Unlike the original ROC, we do not handle individual objects, but use their hierarchical structures such as a Bounding Volume Hierarchy (BVH). The BVH is iteratively traversed from a moderate depth down to a deeper level (but not the bottom of the tree). The interior nodes are occlusion-tested in batch. The granularity of the culling is coarser, but the lightweight occlusion test with fewer draw calls leads to a great speedup in overall rendering performance.
July 13, 2020
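A CPU-side sketch of the iterative batched traversal might look like the following. The `Node` layout, the `visible_test` callback (standing in for a GPU occlusion query), and the depth-cutoff policy are all illustrative assumptions, not the poster's implementation:

```python
class Node:
    # Minimal BVH node: an AABB stand-in (here a 1D interval), child
    # nodes for interior nodes, and object ids for leaves.
    def __init__(self, bounds, children=(), objects=()):
        self.bounds, self.children, self.objects = bounds, children, objects

def cull(roots, visible_test, max_depth):
    # Iteratively refine: occlusion-test a whole batch of nodes per
    # step, descend only into visible interior nodes, and stop at
    # max_depth rather than the bottom of the tree.
    batch, drawn = list(roots), []
    for _ in range(max_depth):
        survivors = [n for n in batch if visible_test(n.bounds)]
        next_batch = []
        for n in survivors:
            if n.children:
                next_batch.extend(n.children)
            else:
                drawn.extend(n.objects)
        if not next_batch:
            return drawn
        batch = next_batch
    # Granularity floor reached: draw every subtree still potentially visible.
    for n in batch:
        drawn.extend(collect(n))
    return drawn

def collect(n):
    if not n.children:
        return list(n.objects)
    out = []
    for c in n.children:
        out += collect(c)
    return out
```

Testing interior nodes in batch is what keeps the number of occlusion queries and draw calls small, at the cost of coarser culling granularity.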

Sangmin Lee and Sungkil Lee
Eurographics Posters, 2016.
Lens flare, comprising diffraction patterns of direct lights and ghosts of an aperture, is one of the artistic artifacts in optical systems. The generation of far-field diffraction patterns has commonly used the Fourier transform of the iris aperture. While such outcomes are physically faithful, more flexible and intuitive editing of diffraction patterns has not been explored so far. In this poster, we present a novel scheme of diffraction synthesis, which additively integrates diffraction elements. We decompose the aperture into curved edges and a circular core so that they abstract non-symmetric streaks and circular core highlights, respectively. We then apply the Fourier transform to each, rotate them, and finally composite them into a single output image. In this way, we can easily generate diffraction patterns similar to those of the source aperture, as well as more exaggerated ones.
May 9, 2016

Euijai Ahn, Sungkil Lee, and Gerard J. Kim
Proc. ACM VR Software and Tech. Posters, 199–199, 2015.

Kihyuk Kim and Sungkil Lee
Eurovis Posters, 2015.
Volume editing with moving least squares is one of the effective schemes to achieve continuous and smooth deformation of existing volumes, but its interactive authoring has not been explored extensively. We present a framework for interactive editing of volume data with free-form deformation, which provides intuitive and interactive feedback on the fly. Given control points, we extend moving least squares with their visual metaphor to further encompass non-spatial attributes including lightness, density, and hue. Furthermore, a full GPU implementation of our framework achieves instant real-time feedback with a quick-and-easy volume editing metaphor.
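Moving least squares over a scalar attribute can be illustrated in one dimension: at each query point, a locally weighted linear model is fit to the control points, which varies smoothly with the query. The Gaussian weighting and bandwidth below are assumptions for illustration, not the paper's formulation:

```python
import math

def mls_1d(x, pts, vals, h=1.0):
    # Moving least squares: at query x, fit a local linear model
    # v(t) = a*t + b minimizing sum_i w_i(x) * (a*t_i + b - v_i)^2
    # with Gaussian weights centered at x, then evaluate it at x.
    w = [math.exp(-((x - p) / h) ** 2) for p in pts]
    sw   = sum(w)
    swx  = sum(wi * p for wi, p in zip(w, pts))
    swxx = sum(wi * p * p for wi, p in zip(w, pts))
    swv  = sum(wi * v for wi, v in zip(w, vals))
    swxv = sum(wi * p * v for wi, p, v in zip(w, pts, vals))
    det = sw * swxx - swx * swx
    if abs(det) < 1e-12:              # degenerate: weighted mean fallback
        return swv / sw
    a = (sw * swxv - swx * swv) / det
    b = (swxx * swv - swx * swxv) / det
    return a * x + b
```

A defining property of MLS with a linear basis is exact reproduction of linear data regardless of the weights, which also makes it easy to verify.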

Hyunjin Lee, Yuna Jeong, and Sungkil Lee
ACM SIGGRAPH ASIA Posters, 2013.
This paper presents a recursive tessellation scheme that can represent infinitesimally fine details beyond the typical limits of graphics hardware at run time, further combined with multiple levels of displacement mapping.

Yuna Jeong, Kangtae Kim, and Sungkil Lee
ACM SIGGRAPH Posters, 2012.
Selected in the Semifinal list of SIGGRAPH Student Research Competition
This paper presents a new DOF rendering algorithm, based on the distributed rasterization [Haeberli and Akeley 1990] and LOD management. Our solution allows us to maintain the benefit of the object-based approach without spatiotemporal quality loss, while achieving real-time performance. A key idea is that geometric degradation of models is not perceived when they are blurred. Hence, lower details can be used for blurred models, greatly improving performance. Another challenge here is avoiding temporal popping artifacts resulting from transitions between adjacent discrete levels. To avoid this problem, we propose a novel blending method for LOD.




Patents

According to the present invention, a lens flare generation method and apparatus are provided that may simulate lens flare effects through paraxial approximation-based linear approximation, generating a lens flare that utilizes the physical characteristics of a lens system at remarkably high speed compared with the conventional art. Further, according to an embodiment of the present invention, a non-linear effect may be added to the linear pattern-based lens flare effect, generating an actual lens flare that reflects most of the physical characteristics of the lens system. Further, the use of a pre-recorded non-linear pattern allows for generation of a lens flare with a quality similar to the existing ray tracing-based simulation, at higher speed than the conventional art.

A lens flare generation method and apparatus simulates lens flare effects through paraxial approximation-based linear approximation to generate a lens flare utilizing physical characteristics of a lens system while generating a lens flare at high speed. A non-linear effect may be added to a linear pattern-based lens flare effect to generate an actual lens flare reflecting most of physical characteristics generated from the lens system. A pre-recorded non-linear pattern may be used.

Sungkil Lee and Younguk Kim
US Patent No.: 9,280,846, Mar 8, 2016.
A method for performing occlusion queries is disclosed. The method includes steps of: (a) a graphics processing unit (GPU) using a first depth buffer of a first frame to predict a second depth buffer of a second frame; and (b) the GPU performing occlusion queries for the second frame by using the predicted second depth buffer, wherein the first frame precedes the second frame. In accordance with the present invention, a configuration for classifying the objects into occluders and occludees is not required, and the occlusion queries for the predicted second frame are acquired in advance at the end of the first frame or the beginning of the second frame.
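One plausible way to realize such depth-buffer prediction, sketched here on a 1D depth buffer: reproject the previous frame's depths by the estimated camera motion and dilate with a max filter so the prediction errs toward "farther", keeping the occlusion test conservative. The shift model and filter size are illustrative assumptions, not the patent's claimed method:

```python
FAR = 1.0  # depth value meaning "nothing rendered here"

def predict_depth(prev, shift):
    # Reproject: pixel i of the predicted frame saw pixel i - shift in
    # the previous frame (a 1-pixel-granular camera pan).
    n = len(prev)
    pred = [prev[i - shift] if 0 <= i - shift < n else FAR
            for i in range(n)]
    # Dilate with a 3-tap max filter so predicted depths err toward FAR,
    # which avoids culling objects that might actually be visible.
    return [max(pred[max(0, i - 1):i + 2]) for i in range(n)]

def occluded(pred, lo, hi, zmin):
    # An occludee covering pixels [lo, hi] with nearest depth zmin is
    # culled only if it lies behind the predicted depth everywhere.
    return all(zmin >= pred[i] for i in range(lo, hi + 1))
```

Because the test runs against a predicted buffer, the queries can be issued before the second frame is rasterized, which matches the patent's goal of acquiring the results in advance.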

Matthias Hullin, Sungkil Lee, Hans-Peter Seidel, and Elmar Eisemann.
Publication No.: WO2012/146303, Application No.: PCT/EP2011/056850, 2012.
A method and device for efficiently simulating lens flares produced by an optical system are provided. The method comprises the steps of: simulating paths of rays from a light source through the optical system, the rays representing light; and estimating, for points in a sensor plane, an irradiance based on intersections of the simulated paths with the sensor plane.




Ph.D. Theses

This dissertation presents a GPU-based rendering algorithm for real-time defocus blur and bokeh effects, which significantly improve the perceptual realism of synthetic images and can emphasize the user's attention. The defocus blur algorithm combines three distinctive techniques: (1) adaptive discrete geometric level of detail (LOD), made popping-free by blending visibility samples across the two adjacent geometric levels; (2) adaptive visibility/shading sampling via sample reuse; (3) visibility supersampling via height-field ray casting. All three techniques are seamlessly integrated to lower the rendering cost of smooth defocus blur with high visibility sampling rates, while maintaining most of the quality of brute-force accumulation buffering. Also, the author presents a novel parametric model to include expressive chromatic aberrations in defocus blur rendering and its effective implementation using the accumulation buffering. The model modifies the thin-lens model to adopt the axial and lateral chromatic aberrations, which allows us to easily extend them with nonlinear and artistic appearances beyond physical limits. For the dispersion to be continuous, we employ a novel unified 3D sampling scheme, involving both the lens and spectrum. Further, the author shows a spectral equalizer to emphasize particular dispersion ranges. As a consequence, our approach enables more intuitive and explicit control of chromatic aberrations, unlike the previous physically-based rendering methods. Finally, the dissertation presents an efficient bokeh rendering technique that splats pre-computed sprites but takes dynamic visibilities and appearances into account at runtime. To achieve an alias-free look without excessive sampling resulting from strong highlights, the author efficiently samples visibilities using rasterization from highlight sources. Our splatting uses a single precomputed 2D texture, which encodes radial aberrations against object depths.
To further integrate dynamic appearances, the author also proposes an effective parameter sampling scheme for focal distance, radial distortion, optical vignetting, and spectral dispersion. The method allows us to render complex appearances of bokeh efficiently, which greatly improves the photorealism of defocus blur.

This dissertation presents a real-time perceptual rendering framework based on computational visual attention tracking in a virtual environment (VE). The visual attention tracking identifies the most plausibly attended objects using top-down (goal-driven) contexts inferred from a user's navigation behaviors as well as a conventional bottom-up (feature-driven) saliency map. A human experiment was conducted to evaluate the prediction accuracy of the framework by comparing objects regarded as attended to with human gazes collected with an eye tracker. The experimental results indicate that the accuracy is at a level well supported by human cognition theories. The attention tracking framework is then applied to depth-of-field (DOF) rendering and level-of-detail (LOD) management, which are representative techniques to improve perceptual quality and rendering performance, respectively. Prior to applying the attention tracking to DOF rendering, we propose two GPU-based real-time DOF rendering methods, since there have been few methods suitable for interactive VEs. One method extends the previous mipmap-based approach, and the other, the previous layered and scatter approaches. Both DOF rendering methods achieve real-time performance without major artifacts present in previous methods. With the DOF rendering methods, we demonstrate attention-guided DOF rendering and LOD management, which use the depths and the levels of attention of attended objects as focal depths and fidelity levels, respectively. The attention-guided DOF rendering can simulate an interactive lens blur effect without an eye tracker, and the attention-guided LOD management can significantly improve rendering performance with little perceptual degradation.




27336, College of Software, Sungkyunkwan University, Tel. +82 31-299-4917, Seobu-ro 2066, Jangan-gu, Suwon, 16419, South Korea
Campus map (how to reach CGLab)