* Paper copies provided on this page are the authors' preprints. Only personal and academic uses are allowed.

Journal Articles

Sungkil Lee, Younguk Kim, and Elmar Eisemann
(in press) ACM Trans. Graphics, 0(0), 1–13, 2018.
ISSN: 0730-0301. IF=4.384. JCR=3/104. (Submitted) Sep. 21, 2017. (Accepted) Jul. 12, 2018.
Abstract: This article presents an iterative backward-warping technique and its applications. It predictively synthesizes depth buffers for novel views. Our solution is based on a fixed-point iteration that converges quickly in practice. Unlike previous techniques, our solution is a pure backward warping that does not use bidirectional sources. To efficiently seed the iterative process, we also propose a tight bounding method for motion vectors. Non-convergent depth holes are inpainted via deep depth buffers. Our solution works well with arbitrarily distributed motion vectors under moderate motions. Many scenarios can benefit from our depth warping. As an application, we propose a highly scalable image-based occlusion-culling technique, achieving a significant speedup over the state of the art. We also demonstrate the benefit of our solution in multi-view soft-shadow generation.
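The fixed-point iteration at the heart of such backward warping can be illustrated in one dimension: for each target position p, the source q satisfying q + M(q) = p is found by iterating q ← p − M(q). A minimal sketch with a hypothetical smooth 1D motion field (not the paper's implementation, which operates on 2D motion-vector buffers):

```python
def backward_fixed_point(p, motion, iters=20):
    """Solve q + motion(q) = p for the source position q by
    fixed-point iteration q <- p - motion(q), seeded at p itself."""
    q = p
    for _ in range(iters):
        q = p - motion(q)
    return q

# Hypothetical smooth, moderate motion field; a contraction, so the
# iteration converges quickly, as the abstract notes for practice.
motion = lambda x: 0.2 * (x - 5.0)
q = backward_fixed_point(10.0, motion)
residual = abs(q + motion(q) - 10.0)   # how well q warps onto p = 10.0
```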

Timothy R. Kol, Pablo Bauszat, Sungkil Lee, and Elmar Eisemann
(Accepted to) Computer Graphics Forum, 0(0), 1–13, 2018.
ISSN: 0167-7055. IF=2.046. JCR=22/104. Jul. 12, 2018.
Abstract: We present a scalable solution to render complex scenes from a large number of viewpoints. While previous approaches rely either on a scene or a view hierarchy to process multiple elements together, we make full use of both, enabling sublinear performance in terms of views and scene complexity. By concurrently traversing the hierarchies, we efficiently find shared information among views to amortize rendering costs. One example application is many-light global illumination. Our solution accelerates shadow-map generation for virtual point lights, whose number can now be raised to over a million while maintaining interactive rates.

Leonardo Scandolo, Sungkil Lee, and Elmar Eisemann
Computer Graphics Forum (Proc. EGSR 2018), 37(4), 167–176, 2018.
ISSN: 0167-7055. IF=2.046. JCR=22/104. Jul. 1, 2018.
Abstract: Far-field diffraction can be evaluated using the Discrete Fourier Transform (DFT) in image space, but it is costly due to its dense sampling. We propose a technique based on a closed-form solution of the continuous Fourier transform for simple vector primitives (quads) and propose a hierarchical and progressive evaluation to achieve real-time performance. Our method is able to simulate diffraction effects in optical systems and can handle varying visibility due to dynamic light sources. Furthermore, it seamlessly extends to near-field diffraction. We show the benefit of our solution in various applications, including realistic real-time glare and bloom rendering.

Martin Cadik, Daniel Sykora, and Sungkil Lee
Elsevier Computers & Graphics, 74, 109–118, 2018.
ISSN: 0097-8493. IF=1.200. JCR=56/104. Aug. 1, 2018.
Abstract: Image enhancement tasks can highly benefit from depth information, but the direct estimation of outdoor depth maps is difficult due to vast object distances. This paper presents a fully automatic framework for model-based synthesis of outdoor depth maps and its applications to image enhancement. We leverage 3D terrain models and camera-pose estimation techniques to render approximate depth maps without resorting to manual alignment. Potential local misalignments, resulting from insufficient model details and rough registrations, are eliminated with our novel free-form warping. We first align synthetic depth edges with photo edges using as-rigid-as-possible image registration and further refine the shape of the edges using tight trimap-based alpha matting. The resulting synthetic depth maps are accurate and calibrated in absolute distance. We demonstrate their benefit in image enhancement techniques including reblurring, depth-of-field simulation, haze removal, and guided texture synthesis.

Sun Geol Baek, Dong Hyun Kang, Sungkil Lee, and Young Ik Eom
Journal of Systems and Software, 140, 17–31, 2018.
ISSN: 0164-1212. IF=2.278. JCR=19/104. Feb. 24, 2018.
Abstract: Abnormal messages propagated from faulty operations in a vehicular system may severely harm the system, but they cannot be easily detected when their information is not known in advance. To support efficient detection of faulty message patterns propagated in the in-vehicle network, this paper presents a novel graph pattern matching framework built upon message log-driven graph modeling. Our framework models the unknown condition as a query graph and the reference database of normal operations as data graphs. The analysis of faulty message propagation requires considering the sequence of events in the distance measure, and thus the conventional graph distance measures cannot be directly used for our purpose. We hence propose a novel distance metric based on the maximum common subgraph (MCS) between two graphs and the sequence numbers of messages, which works robustly even for abnormal faulty patterns and avoids false negatives in large databases. Since the problem of MCS computation is NP-hard, we also propose two efficient filtering techniques: one based on the lower bound of the MCS distance for a polynomial-time approximation, and the other based on edge pruning. Experiments performed on real and synthetic datasets show that our framework significantly outperforms existing methods in terms of both performance and accuracy of query responses.
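The MCS-based distance and its lower-bound filter can be sketched generically. The classic Bunke–Shearer form is d = 1 − |MCS| / max(|G1|, |G2|); since |MCS| can never exceed min(|G1|, |G2|), a cheap bound can discard candidates before running the NP-hard MCS computation. This sketch omits the paper's sequence-number weighting and is only the generic baseline:

```python
def mcs_distance(n_mcs, n1, n2):
    # Bunke-Shearer MCS distance: 1 - |MCS| / max(|G1|, |G2|),
    # given the (expensively computed) MCS size n_mcs.
    return 1.0 - n_mcs / max(n1, n2)

def lower_bound_distance(n1, n2):
    # |MCS| <= min(|G1|, |G2|), so this bound holds without computing
    # the MCS at all -- usable as a polynomial-time filter.
    return 1.0 - min(n1, n2) / max(n1, n2)

def passes_filter(n_query, n_data, threshold):
    # Keep a candidate only if its distance could still be <= threshold.
    return lower_bound_distance(n_query, n_data) <= threshold
```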

Sunghun Jo, Yuna Jeong, and Sungkil Lee
Journal of Computer Science and Technology, 33(2), 417–428, 2018.
ISSN: 1000-9000. IF=0.956. JCR=76/106.
Abstract: This paper presents a scalable parser framework using graphics processing units (GPUs) for massive text-based files. Specifically, our solution is designed to efficiently parse Wavefront OBJ models, whose text specifies 3D geometries and their topology. Our work bases its scalability and efficiency on chunk-based processing. The entire parsing problem is subdivided into subproblems whose chunks can be processed independently and merged seamlessly. The within-chunk processing is made highly parallel, leveraged by GPUs. Our approach thereby overcomes the bottlenecks of the existing OBJ parsers. Experiments performed to assess the performance of our system showed that our solutions significantly outperform existing CPU-based as well as GPU-based solutions.
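The chunk-subdivision step can be sketched as follows: each tentative chunk boundary is advanced to the next newline so that no OBJ statement (e.g. `v x y z` or `f a b c`) straddles two chunks, letting each chunk be parsed independently and merged seamlessly. A minimal CPU-side sketch (the paper's parser processes chunks in parallel on the GPU):

```python
def chunk_boundaries(text, chunk_size):
    """Split a text buffer into independently parsable chunks by
    snapping each tentative boundary forward to the next newline."""
    bounds, start, n = [], 0, len(text)
    while start < n:
        end = min(start + chunk_size, n)
        if end < n:
            nl = text.find('\n', end)
            end = n if nl == -1 else nl + 1
        bounds.append((start, end))
        start = end
    return bounds

obj = "v 0 0 0\nv 1 0 0\nv 0 1 0\nf 1 2 3\n"
chunks = [obj[a:b] for a, b in chunk_boundaries(obj, 10)]
```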

Jun Suk Kim, Sungkil Lee, and Min Young Chung
Pervasive and Mobile Computing, 44, 45–57, 2018.
ISSN: 1574-1192. IF=2.974. JCR=33/148. Feb. 1, 2018.
Abstract: Cellular Internet-of-Things (CIoT) systems were recently developed by the Third-Generation Partnership Project (3GPP) to support Internet-of-Things (IoT) services over conventional mobile-communication infrastructures. The CIoT systems allow a large number of IoT devices to be connected through the random-access procedure, but the concurrent accesses of the massive devices make this procedure heavily competitive. In this article, we present an effective time-division random-access scheme built upon coverage levels (CLs), where each CIoT device is assigned a CL and categorized based on its radio-channel quality. In our scheme, the random-access loads of device groups having different CLs are distributed into different time periods, which greatly relaxes instantaneous contention and improves random-access performance. To assess the performance of our scheme, we also introduce a mathematical model that expresses and analyzes the states and behaviors of CIoT devices using a Markov chain. Mathematical analysis and simulation results show that our scheme significantly outperforms the conventional scheme (without time-division control) in terms of collision probability, access success rate, and access-blocking probability.
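Why spreading CL groups over separate time periods relaxes contention can be illustrated with a small Monte Carlo sketch of preamble collisions (hypothetical numbers: 54 preambles, 40 devices split into two groups of 20; this is not the paper's Markov-chain model):

```python
import random

def collision_rate(n_devices, n_preambles, trials=2000, seed=7):
    """Fraction of devices whose randomly chosen preamble collides
    with at least one other device in the same access opportunity."""
    rng = random.Random(seed)
    collided = 0
    for _ in range(trials):
        picks = [rng.randrange(n_preambles) for _ in range(n_devices)]
        counts = {}
        for p in picks:
            counts[p] = counts.get(p, 0) + 1
        collided += sum(c for c in counts.values() if c > 1)
    return collided / (trials * n_devices)

# All 40 devices contend at once vs. two groups of 20 in separate periods.
single = collision_rate(40, 54)
grouped = collision_rate(20, 54)   # per-period load halved
```

Halving the per-period load roughly halves the expected contention each device sees, so `grouped` comes out well below `single`.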

Soonhyeon Kwon, Younguk Kim, Kihyuk Kim, and Sungkil Lee
Computer Animation and Virtual Worlds, 29(1), e1784:1–14, 2018.
ISSN: 1546-4261. IF=0.697. JCR=91/104. Feb. 6, 2018.
Abstract: This paper presents a novel heterogeneous volume deformation technique and an intuitive volume animation authoring framework. Our volume deformation extends the previous technique based on moving least squares with a density-aware weighting metric for data-driven importance control and efficient upsampling-based volume synthesis. For user interaction, we present an intuitive visual metaphor and interaction schemes to support effective spatiotemporal editing of volume deformation animation. Our framework is implemented fully on graphics processors and is thus suitable for quick-and-easy prototyping of volume deformation with improved controllability.

Jun Suk Kim, Sungkil Lee, and Min Young Chung
IEEE Trans. Vehicular Technology, 66(7), 6280–6290, 2017.
ISSN: 0018-9545. IF=2.243. JCR=14/82. Jul. 1, 2017.
Abstract: In order to facilitate low-cost network connection of many devices, machine-type communication (MTC) has evolved to low-cost MTC (LC-MTC) in the third-generation partnership project (3GPP) standard. LC-MTC should be able to effectively handle intensive accesses through multiple narrow-band (NB) random-access channels (RACHs) assigned within the bandwidth of a long-term evolution (LTE) system. As the number of MTC devices and their congestion rapidly increase, the random-access scheme for LC-MTC RACH needs to be improved. This paper presents a novel random-access scheme that introduces virtual preambles of LC-MTC devices and associates them with RACH indices to effectively discern LC-MTC devices. In comparison to the sole use of preambles, our scheme allows an LC-MTC device to better choose a unique virtual preamble. Thereby, the probability of successful accesses of LC-MTC devices increases in contention-based random-access environments. We experimentally assessed our scheme and the results show that our scheme performs better than the existing preamble-based scheme in terms of collision probability, access delay, and access-blocking probability.

Yunji Kang, Woohyun Joo, Sungkil Lee, and Dongkun Shin
Journal of Systems Architecture, 76, 17–27, 2017.
ISSN: 1383-7621. IF=1.579. JCR=47/106. May 5, 2017.
Abstract: Many visual tasks in modern personal devices such as smartphones resort heavily to graphics processing units (GPUs) for fluent user experiences. Because most GPUs for embedded systems are nonpreemptive by nature, it is important to schedule GPU resources efficiently across multiple GPU tasks. We present a novel spatial resource sharing (SRS) technique for GPU tasks, called budget-reservation spatial resource sharing (BR-SRS) scheduling, which limits the number of GPU processing cores for a job based on the priority of the job. Such priority-driven resource assignment can prevent a high-priority foreground GPU task from being delayed by background GPU tasks. The BR-SRS scheduler is invoked only twice, at the arrival and completion of jobs, and thus the scheduling overhead is minimized as well. We evaluated the performance of our scheduling scheme on an Android-based smartphone and found that the proposed technique significantly improved the performance of high-priority tasks in comparison to the previous temporal budget-based multi-task scheduling.

Kihong Lee, DongWoo Lee, Sungkil Lee, and Young Ik Eom
Journal of Supercomputing, 73(4), 1307–1321, 2017.
ISSN: 0920-8542. IF=1.088. JCR=47/105. Mar. 23, 2017.
Abstract: A virtualized system generally suffers from low I/O performance, mainly caused by its inherent abstraction overhead and frequent CPU transitions between the guest and hypervisor modes. The recent research on polling-based I/O virtualization partly solved the problem, but excessive polling trades intensive CPU usage for higher performance. This article presents a power-efficient and high-performance block I/O framework for a virtual machine, which allows us to use it even with a limited number of CPU cores in mobile or embedded systems. Our framework monitors system status and dynamically switches the I/O processing mode between the exit and polling modes, depending on the amount of current I/O requests and CPU utilization. It also dynamically controls the polling interval to reduce redundant polling. The highly dynamic nature of our framework leads to improvements in I/O performance with lower CPU usage as well. Our experiments showed that our framework outperformed the existing exit-based mechanisms with 10.8% higher I/O throughput while maintaining similar CPU usage (only a 3.1% increase). In comparison to systems solely based on the polling mechanism, ours reduced CPU usage to roughly 10.0% with no or negligible performance loss.

Hyuntae Joo, Soonhyeon Kwon, Sangmin Lee, Elmar Eisemann, and Sungkil Lee
Computer Graphics Forum (Proc. EGSR'16), 35(4), 99–105, 2016.
ISSN: 0167-7055. IF=2.046. JCR=22/104. Jun. 22, 2016.
Abstract: We present an efficient ray-tracing technique to render bokeh effects produced by parametric aspheric lenses. Contrary to conventional spherical lenses, aspheric lenses generally do not permit a simple closed-form solution of ray-surface intersections. We propose a numerical root-finding approach, which uses tight proxy surfaces to ensure a good initialization and convergence behavior. Additionally, we simulate mechanical imperfections resulting from the lens fabrication via a texture-based approach. The fractional Fourier transform and spectral dispersion add further realism to the synthesized bokeh effect. Our approach is well-suited for execution on graphics processing units (GPUs), and we demonstrate complex defocus-blur and lens-flare effects.

Yuna Jeong, Sangmin Lee, Soonhyeon Kwon, and Sungkil Lee
The Visual Computer (Proc. CGI'16), 32(6), 1025–1034, 2016.
ISSN: 0178-2789. IF=1.036. JCR=67/104. Jun., 2016.
Abstract: This article presents a novel parametric model that includes expressive chromatic aberrations in defocus-blur rendering, along with an effective implementation using accumulation buffering. Our model modifies the thin-lens model to adopt axial and lateral chromatic aberrations, which allows us to easily extend them with nonlinear and artistic appearances beyond physical limits. For the dispersion to be continuous, we employ a novel unified 3D sampling scheme involving both the lens and the spectrum. We further propose a spectral equalizer to emphasize particular dispersion ranges. As a consequence, our approach enables more intuitive and explicit control of chromatic aberrations, unlike previous physically based rendering methods.

Yuna Jeong, Hyuntae Joo, Gyeonghwan Hong, Dongkun Shin, and Sungkil Lee
IEEE Trans. Consumer Electronics, 61(3), 295–301, 2015.
ISSN: 0098-3063. IF=1.120. JCR=40/82. Aug., 2015.
Abstract: The Internet of Things (IoT) has recently emerged as a common platform and service for consumer electronics. This paper presents an interactive framework for visualizing and authoring IoT in indoor environments such as homes or small offices. Building blocks of the framework are virtual sensors and actuators that abstract physical things and their virtual behaviors on top of their physical networks. Their behaviors are abstracted and programmed through visual authoring tools on the web, which allows a casual consumer to easily monitor and define the behaviors even without knowing the underlying physical connections. The user study performed to assess the usability of the visual authoring showed that it is easy to use, understandable, and also preferred over typical text-based script programming.

Myongchan Kim, Sungkil Lee, and Seungmoon Choi
IEEE Trans. Haptics, 7(3), 394–404, 2014.
ISSN: 1939-1412. IF=1.031. JCR=13/22. Sep. 17, 2014.
Abstract: Tactile feedback coordinated with visual stimuli has proven its worth in mediating immersive multimodal experiences, yet its authoring has relied on content artists. This article presents a fully automated framework of generating tactile cues from streaming images to provide synchronized visuotactile stimuli in real time. The spatiotemporal features of video images are analyzed on the basis of visual saliency and then mapped into the tactile cues that are rendered on tactors installed on a chair. We also conducted two user experiments for performance evaluation. The first experiment investigated the effects of visuotactile rendering against visual-only rendering, demonstrating that the visuotactile rendering improved the movie watching experience to be more interesting, immersive, and understandable. The second experiment was performed to compare the effectiveness of authoring methods and found that the automated authoring approach, used with care, can produce plausible tactile effects similar in quality to manual authoring.

Sungkil Lee, Mike Sips, and Hans-Peter Seidel
IEEE Trans. Vis. and Computer Graphics, 19(10), 1746–1757, 2013.
* Invited to and presented at IEEE InfoVis 2013, Atlanta, GA.
ISSN: 1077-2626. IF=1.400. JCR=25/106. Oct., 2013.

Abstract: Visualization techniques often use color to present categorical differences to a user. When selecting a color palette, the perceptual qualities of color need careful consideration. Large coherent groups visually suppress smaller groups, and are often visually dominant in images. This article introduces the concept of class visibility used to quantitatively measure the utility of a color palette to present coherent categorical structure to the user. We present a color optimization algorithm based on our class visibility metric to make categorical differences clearly visible to the user. We performed two user experiments on user preference and visual search to validate our visibility measure over a range of color palettes. The results indicate that visibility is a robust measure, and our color optimization can increase the effectiveness of categorical data visualizations.

Yuna Jeong, Kangtae Kim, and Sungkil Lee
Computer Graphics Forum, 32(6), 126–134, 2013.
* Invited to and presented at Pacific Graphics 2014, Seoul, Korea.
ISSN: 0167-7055. IF=2.046. JCR=22/104. Sep. 13, 2013.

Abstract: This paper presents a GPU-based rendering algorithm for real-time defocus blur effects, which significantly improves the accumulation buffering. The algorithm combines three distinctive techniques: (1) adaptive discrete geometric level of detail (LOD), made popping-free by blending visibility samples across the two adjacent geometric levels; (2) adaptive visibility/shading sampling via sample reuse; (3) visibility supersampling via height-field ray casting. All the three techniques are seamlessly integrated to lower the rendering cost of smooth defocus blur with high visibility sampling rates, while maintaining most of the quality of brute-force accumulation buffering.

Sungkil Lee and Elmar Eisemann
Computer Graphics Forum (Proc. EGSR'13), 32(4), 1–6, 2013.
ISSN: 0167-7055. IF=2.046. JCR=22/104. Jul. 18, 2013.
Abstract: We present a practical real-time approach for rendering lens-flare effects. While previous work employed costly ray tracing or complex polynomial expressions, we present a coarser, but also significantly faster solution. Our method is based on a first-order approximation of the ray transfer in an optical system, which allows us to derive a matrix that maps lens-flare-producing light rays directly to the sensor. The resulting approach is easy to implement and produces physically-plausible images at high framerates on standard off-the-shelf graphics hardware.
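The first-order (paraxial) ray-transfer idea can be sketched with standard ABCD matrices: composing per-element matrices once yields a single matrix that maps an entrance ray (height, angle) directly to the sensor plane. A minimal sketch with a hypothetical two-lens system (the element spacings and focal lengths below are illustrative, not from the paper):

```python
def mat_mul(a, b):
    # 2x2 matrix product (row-major tuples).
    return ((a[0][0]*b[0][0] + a[0][1]*b[1][0], a[0][0]*b[0][1] + a[0][1]*b[1][1]),
            (a[1][0]*b[0][0] + a[1][1]*b[1][0], a[1][0]*b[0][1] + a[1][1]*b[1][1]))

def mat_vec(a, v):
    return (a[0][0]*v[0] + a[0][1]*v[1], a[1][0]*v[0] + a[1][1]*v[1])

def thin_lens(f):
    # Paraxial (first-order) ray-transfer matrix of a thin lens.
    return ((1.0, 0.0), (-1.0 / f, 1.0))

def propagate(d):
    # Free-space propagation over distance d.
    return ((1.0, d), (0.0, 1.0))

# Hypothetical system (units: mm); matrices compose right-to-left,
# in the order the ray traverses the elements.
system = mat_mul(propagate(30.0),
         mat_mul(thin_lens(80.0),
         mat_mul(propagate(10.0), thin_lens(50.0))))
height, angle = mat_vec(system, (1.0, 0.0))  # parallel entrance ray at height 1
```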

Matthias Hullin, Elmar Eisemann, Hans-Peter Seidel, and Sungkil Lee.
ACM Trans. Graphics (Proc. SIGGRAPH'11), 30(4), 108:1–9, 2011.
ISSN: 0730-0301. IF=4.384. JCR=3/104. Jul. 25, 2011.
Abstract: Lens flare is caused by light passing through a photographic lens system in an unintended way. Often considered a degrading artifact, it has become a crucial component for realistic imagery and an artistic means that can even lead to an increased perceived brightness. So far, only costly offline processes allowed for convincing simulations of the complex light interactions. In this paper, we present a novel method to interactively compute physically-plausible flare renderings for photographic lenses. The underlying model covers many components that are important for realism, such as imperfections, chromatic and geometric lens aberrations, and antireflective lens coatings. Various acceleration strategies allow for a performance/quality tradeoff, making our technique applicable both in real-time applications and in high-quality production rendering. We further outline artistic extensions to our system.

Sunghoon Yim, Sungkil Lee, and Seungmoon Choi
Interacting with Computers, 23(3), 268–278, 2011.
ISSN: 0953-5438. IF=0.889. JCR=15/22.
Abstract: This article evaluates the usability of motion sensing-based interaction on a mobile platform using image browsing as a representative task. Three types of interfaces, a physical button interface, a motion-sensing interface using a high-precision commercial 3D motion tracker, and a motion-sensing interface using an in-house low-cost 3D motion tracker, are compared in terms of task performance and subjective preference. Participants were provided with prolonged training over 20 days, in order to compensate for the participants' unfamiliarity with the motion-sensing interfaces. Experimental results showed that the participants' task performance and subjective preference for the two motion-sensing interfaces were initially low, but they rapidly improved with training and soon approached the level of the button interface. Furthermore, a recall test, which was conducted 4 weeks later, demonstrated that the usability gains were well retained in spite of the long time gap between uses. Overall, these findings highlight the potential of motion-based interaction as an intuitive interface for mobile devices.

Sungkil Lee, Elmar Eisemann, and Hans-Peter Seidel.
ACM Trans. Graphics (Proc. SIGGRAPH'10), 29(4), 65:1–7, 2010.
ISSN: 0730-0301. IF=4.384. JCR=3/104.
Abstract: We present a novel rendering system for defocus-blur and lens effects. It supports physically-based rendering and outperforms previous approaches by involving a novel GPU-based tracing method. Our solution achieves more precision than competing real-time solutions, and our results are mostly indistinguishable from offline rendering. Our method is also more general and can integrate advanced simulations, such as simple geometric lens models enabling various lens aberration effects. The latter are crucial for realism but are often employed in artistic contexts too. We show that available artistic lenses can be simulated by our method. In this spirit, our work introduces intuitive control over depth-of-field effects. The physical basis is crucial as a starting point to enable new artistic renderings based on a generalized focal surface to emphasize particular elements in the scene while retaining a realistic look. Our real-time solution provides realistic as well as plausibly expressive results.

Sungkil Lee, Elmar Eisemann, and Hans-Peter Seidel.
ACM Trans. Graphics (Proc. SIGGRAPH ASIA'09), 28(5), 134:1–6, 2009.
ISSN: 0730-0301. IF=4.384. JCR=3/104.
Abstract: We present a GPU-based real-time rendering method that simulates high-quality depth-of-field blur, similar in quality to multiview accumulation methods. Most real-time approaches have difficulty obtaining good approximations of visibility and view-dependent shading due to the use of a single view image. Our method also avoids rendering the scene multiple times, but can approximate different views by relying on a layered image-based scene representation. We present several performance and quality improvements, such as early culling, approximate cone tracing, and jittered sampling. Our method achieves artifact-free results for complex scenes and reasonable depth-of-field blur in real time.

Sungkil Lee, Gerard J. Kim, and Seungmoon Choi
IEEE Trans. Vis. and Computer Graphics, 15(3), 453–464, 2009.
ISSN: 1077-2626. IF=1.400. JCR=25/106.
Abstract: This article presents a real-time GPU-based postfiltering method for rendering acceptable depth-of-field effects suited for virtual reality. Blurring is achieved by nonlinearly interpolating mipmap images generated from a pinhole image. Major artifacts common in the postfiltering techniques such as a bilinear magnification artifact, intensity leakage, and blurring discontinuity are practically eliminated via magnification with a circular filter, anisotropic mipmapping, and smoothing of blurring degrees. The whole framework is accelerated using GPU programs for constant and scalable real-time performance required for virtual reality. We also compare our method to recent GPU-based methods in terms of image quality and rendering performance.
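Mipmap-based blurring of this kind can be sketched with the standard thin-lens circle of confusion (CoC) and a log2 mapping from CoC size to mip level; the paper's nonlinear interpolation and filtering differ, so this is only an illustrative baseline with hypothetical parameters:

```python
import math

def coc_diameter(depth, focus, f, aperture):
    """Standard thin-lens circle of confusion on the image plane
    (same units as f) for an object at `depth` with focus at `focus`."""
    return aperture * abs(depth - focus) / depth * f / (focus - f)

def mip_level(coc_pixels):
    # A CoC of roughly 2^k pixels is matched by mip level k, since each
    # mip level halves resolution (doubles the averaging footprint).
    return max(0.0, math.log2(max(coc_pixels, 1.0)))
```

For example, an in-focus pixel (CoC below one pixel) stays at level 0, while an 8-pixel CoC selects level 3.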

Sungkil Lee, Gerard J. Kim, and Seungmoon Choi
IEEE Trans. Vis. and Computer Graphics, 15(1), 6–19, 2009.
ISSN: 1077-2626. IF=1.400. JCR=25/106.
Abstract: This paper presents a real-time framework for computationally tracking objects visually attended by the user while navigating in interactive virtual environments (VEs). In addition to the conventional bottom-up (stimulus-driven) saliency map, the proposed framework uses top-down (goal-directed) contexts inferred from the user's spatial and temporal behaviors and identifies the most plausibly attended objects among candidates in the object saliency map. The computational framework was implemented on the GPU, exhibiting high computational performance adequate for interactive VEs. A user experiment was also conducted to evaluate the prediction accuracy of the tracking framework by comparing objects regarded as visually attended by the framework to actual human gaze collected with an eye tracker. The results indicated that the accuracy was at a level well supported by the theory of human cognition for visually identifying single and multiple attentive targets, especially owing to the addition of top-down contextual information. Finally, we demonstrate how the visual attention tracking framework can be applied to managing the level of detail in VEs, without any hardware for head or eye tracking.

Sungkil Lee, Gerard J. Kim, and Seungmoon Choi
Computer Graphics Forum (Proc. Pacific Graphics'08), 27(7), 1955–1962, 2008.
ISSN: 0167-7055. IF=2.046. JCR=22/104.
Abstract: We present a real-time method for rendering a depth-of-field effect based on per-pixel layered splatting, where source pixels are scattered on one of the three layers of a destination pixel. In addition, the missing information behind foreground objects is filled with an additional image of the areas occluded by nearer objects. The method creates high-quality depth-of-field results even in the presence of partial occlusion, without major artifacts often present in previous real-time methods. The method can also be applied to simulating defocused highlights. The entire framework is accelerated by the GPU, enabling real-time post-processing for both off-line and interactive applications.

Sungkil Lee and Gerard J. Kim.
Interacting with Computers, 20(4–5), 491–502, 2008.
ISSN: 0953-5438. IF=0.889. JCR=15/22.
Abstract: This article reports two human experiments to investigate the effects of visual cues and sustained attention on spatial presence over a period of prolonged exposure in virtual environments. Inspired by the two functional subsystems subserving spatial and object vision in the human brain, visual cues and sustained attention were each classified into spatial and object cues, and spatial and non-spatial attention, respectively. In the first experiment, the effects of visual cues on spatial presence were examined when subjects were exposed to virtual environments configured with combinations of spatial and object cues. It was found that both types of visual cues enhanced spatial presence with saturation over a period of prolonged exposure, but the contribution of spatial cues became more relevant with longer exposure time. In the second experiment, subjects were asked to carry out two tasks involving sustained spatial attention and sustained non-spatial attention. We observed that spatially directed attention improved spatial presence more than non-spatially directed attention did. Furthermore, spatial attention had a positive interaction with detailed object cues.

Jane Hwang, J. Jung, S. Yim, J. Cheon, Sungkil Lee, S. Choi, and Gerard J. Kim
International Journal of Virtual Reality, 5(2), 59–66, 2006.
ISSN: 1081-1451.
Abstract: While hand-held computing devices are capable of rendering advanced 3D graphics and processing multimedia data, they are not designed to provide and induce a sufficient sense of immersion and presence for virtual reality. In this paper, we propose minimal requirements for realizing VR on a hand-held device. Furthermore, based on the proposed requirements, we have designed and implemented a low-cost hand-held VR platform by adding multimodal sensors and display components to a hand-held PC. The platform enables a motion-based interface, an essential part of realizing VR on a small hand-held device, and provides outputs in three modalities, visual, aural, and tactile/haptic, for a reasonable sensory experience. We showcase our platform and demonstrate the possibilities of hand-held VR through three VR applications: a typical virtual walkthrough, a 3D multimedia contents browser, and a motion-based racing game.

Conference Papers/Posters

Sangmin Lee and Sungkil Lee
Eurographics Posters, 2016.
May 9, 2016.
Abstract: Lens flare, comprising diffraction patterns of direct lights and ghosts of an aperture, is one of the artistic artifacts in optical systems. The generation of far-field diffraction patterns has commonly used the Fourier transform of the iris aperture. While such outcomes are physically faithful, more flexible and intuitive editing of diffraction patterns has not been explored so far. In this poster, we present a novel scheme of diffraction synthesis, which additively integrates diffraction elements. We decompose the aperture into curved edges and a circular core, which abstract non-symmetric streaks and circular core highlights, respectively. We then apply the Fourier transform to each, rotate them, and finally composite them into a single output image. In this way, we can easily generate diffraction patterns similar to those of the source aperture, as well as more exaggerated ones.

Kihyuk Kim and Sungkil Lee
Eurovis Posters, 2015.
Abstract: Volume editing with moving least squares is one of the effective schemes to achieve continuous and smooth deformation of existing volumes, but its interactive authoring has not been explored extensively. We present a framework for interactive editing of volume data with free-form deformation, which provides intuitive and interactive feedback on the fly. Given control points, we extend moving least squares with their visual metaphor to further encompass non-spatial attributes including lightness, density, and hue. Furthermore, a full GPU implementation of our framework achieves instant real-time feedback with a quick-and-easy volume-editing metaphor.

Hyunjin Lee, Yuna Jeong, and Sungkil Lee
ACM SIGGRAPH ASIA Posters, 2013.
Abstract: This paper presents a recursive tessellation scheme, which can represent virtually infinitesimal details beyond the typical limits of graphics hardware at run time, further combined with multiple levels of displacement mapping.

Yuna Jeong, Kangtae Kim, and Sungkil Lee
ACM SIGGRAPH Posters, 2012.
* Selected as a semifinalist in the ACM SIGGRAPH Student Research Competition.
Abstract: This paper presents a new DOF rendering algorithm based on distributed rasterization [Haeberli and Akeley 1990] and LOD management. Our solution allows us to maintain the benefit of the object-based approach without spatiotemporal quality loss, while achieving real-time performance. A key idea is that geometric degradation of models is not perceived when they are blurred. Hence, lower details can be used for blurred models, greatly improving performance. Another challenge is avoiding temporal popping artifacts resulting from transitions between adjacent discrete levels. To avoid this problem, we propose a novel blending method for LOD.

Myongchan Kim, Sungkil Lee, and Seungmoon Choi
Proc. Eurohaptics, 258–269, 2012.
June 13, 2012.

Sungkil Lee, Gerard J. Kim, and Seungmoon Choi
Proc. ACM VR Software and Tech., 29–38, 2007.
Invited to the TVCG special section on VRST'07 best papers.
Abstract: This paper presents a real-time framework for computationally tracking objects visually attended by the user while navigating in interactive virtual environments. In addition to the conventional bottom-up (stimulus-driven) features, the framework also uses top-down (goal-directed) contexts to predict the human gaze. The framework first builds feature maps using preattentive features such as luminance, hue, depth, size, and motion. The feature maps are then integrated into a single saliency map using the center-surround difference operation. This pixel-level bottom-up saliency map is converted to an object-level saliency map using the item buffer. Finally, the top-down contexts are inferred from the user's spatial and temporal behaviors during interactive navigation and used to select the most plausibly attended object among the candidates produced in the object saliency map. The computational framework was implemented on the GPU and exhibited extremely fast computing performance (5.68 msec for a 256x256 saliency map), substantiating its adequacy for interactive virtual environments. A user experiment was also conducted to evaluate the prediction accuracy of the visual attention tracking framework with respect to actual human gaze data. The attained accuracy level was well supported by the theory of human cognition for visually identifying single and multiple attentive targets, especially due to the addition of top-down contextual information. The framework can be effectively used for perceptually based rendering without employing an expensive eye tracker, such as providing depth-of-field effects and managing the level of detail in virtual environments.
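The bottom-up stage can be illustrated with a toy center-surround operator: blur each feature map at a fine and a coarse scale, rectify the difference, then normalize and average across channels. This numpy sketch uses box blurs instead of the Gaussian pyramids typically used in saliency models, with made-up radii; it shows the shape of the computation, not the paper's GPU implementation:

```python
import numpy as np

def box_blur(img, radius):
    """Separable box blur (zero-padded at the borders)."""
    kernel = np.ones(2 * radius + 1) / (2 * radius + 1)
    smooth = lambda m: np.convolve(m, kernel, mode='same')
    return np.apply_along_axis(smooth, 1, np.apply_along_axis(smooth, 0, img))

def center_surround(feature, center_r=1, surround_r=8):
    """Center-surround difference: fine scale minus coarse scale, rectified."""
    return np.abs(box_blur(feature, center_r) - box_blur(feature, surround_r))

def saliency(features):
    """Average of per-channel center-surround maps, each normalized to [0, 1]."""
    maps = [center_surround(f) for f in features]
    return sum(m / (m.max() + 1e-9) for m in maps) / len(maps)
```

A lone bright spot in an otherwise dark feature map produces a strong local response, which is the behavior the center-surround operation is designed to capture.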

Sungkil Lee, Gerard J. Kim, and Janghan Lee
Proc. ACM VR Software and Tech., 73–80, 2004.
Abstract: Presence is one of the goals of many virtual reality systems. Historically, in the context of virtual reality, the concept of presence has been associated largely with spatial perception (a bottom-up process), as its informal definition of "feeling of being there" suggests. However, recent studies in presence have challenged this view and attempted to widen the concept to include psychological immersion, thus linking more high-level elements (processed in a top-down fashion) to presence, such as story and plot, flow, attention and focus, identification with the characters, and emotion. In this paper, we experimentally studied the relationship between two content elements, each representing one of the two axes of the presence dichotomy: perceptual cues for spatial presence and sustained attention for (psychological) immersion. Our belief was that spatial perception or presence and a top-down processed concept such as voluntary attention have only a very weak relationship; thus our experimental hypothesis was that sustained attention would positively affect spatial presence in a virtual environment with impoverished perceptual cues, but have no effect in an environment rich in them. In order to confirm the existence of sustained attention in the experiment, fMRI scans of the subjects were taken and analyzed as well. The experimental results showed that attention had no effect on spatial presence, even in the environment with impoverished spatial cues.

Sungkil Lee, Gerard J. Kim, Albert Rizzo, and Hyungjin Park
Proc. 7th Annual International Workshop on Presence, 20–27, 2004.
Abstract: Spatial presence, among the many aspects of presence, is the sense of physical and concrete space, often dubbed the sense of "being there." This paper theorizes on how "spatial" presence is formed by various types of artificial cues in a virtual environment, whether form or content. We believe that spatial presence is a product of an unconscious effort to correctly register oneself into the virtual environment in a consistent manner. We hypothesize that this process is perceptual and bottom-up in nature, and rooted in the reflexive and adaptive behavior to react to and resolve the mismatch in the spatial cues between the physical space where the user is and the virtual space the user looks at, hears from, and interacts with. Hinted by the fact that our brain has two major paths for processing sensory input, the "where" path for determining object locations and the "what" path for identifying objects, we categorize the sensory stimulation cues in the virtual environment accordingly and investigate their relationships and how they affect the user in adaptively registering oneself into the virtual environment, thus creating spatial presence. Based on the results of a series of our experiments and other bodies of research, we postulate that while low-level and perceptual spatial cues are sufficient for creating spatial presence, they can be affected and modulated by spatial factors, whether form or content. These results provide important insights into constructing a model of spatial presence, its measurement, and guidelines for configuring location-based virtual reality applications.


Abstract: A lens flare generation method and apparatus simulate lens flare effects through paraxial-approximation-based linear approximation, generating a lens flare that utilizes the physical characteristics of a lens system while running at high speed. A non-linear effect may be added to the linear pattern-based lens flare effect to generate an actual lens flare reflecting most of the physical characteristics of the lens system. A pre-recorded non-linear pattern may be used.
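The paraxial (linear) approximation referenced here is the classical ray-transfer-matrix model: a ray is a (height, angle) pair, and each optical element acts as a 2x2 matrix. A minimal sketch with a single thin lens and a free-space drift (the patent's flare ghosts additionally involve reflections between element interfaces, which this sketch omits):

```python
import numpy as np

def propagate(d):
    """Ray-transfer matrix for free-space propagation over distance d."""
    return np.array([[1.0, d], [0.0, 1.0]])

def thin_lens(f):
    """Ray-transfer matrix for a thin lens of focal length f."""
    return np.array([[1.0, 0.0], [-1.0 / f, 1.0]])

# A ray is (height, angle). A parallel ray through a thin lens of focal
# length 50, followed by a 50-unit drift, crosses the optical axis:
ray = np.array([1.0, 0.0])
out = propagate(50.0) @ thin_lens(50.0) @ ray   # rightmost factor acts first
```

Because the whole system collapses to one matrix product, sprite positions for flare patterns can be evaluated in closed form per frame, which is what makes the linear approximation fast.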

Abstract: According to the present invention, a lens flare generation method and apparatus are provided that simulate lens flare effects through paraxial-approximation-based linear approximation, generating a lens flare that utilizes the physical characteristics of a lens system at remarkably high speed compared with the conventional art. Further, according to an embodiment of the present invention, a non-linear effect may be added to the linear pattern-based lens flare effect, generating an actual lens flare reflecting most of the physical characteristics of the lens system. Further, use of a pre-recorded non-linear pattern allows for generation of a lens flare of similar quality to existing ray-tracing-based simulation, at higher speed than the conventional art.

Abstract: A method for performing occlusion queries is disclosed. The method includes the steps of: (a) a graphics processing unit (GPU) using a first depth buffer of a first frame to predict a second depth buffer of a second frame; and (b) the GPU performing occlusion queries for the second frame by using the predicted second depth buffer, wherein the first frame predates the second frame. In accordance with the present invention, a configuration for classifying objects into occluders and occludees is not required, and the occlusion query results for the second frame are acquired in advance, at the end of the first frame or the beginning of the second frame.
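The prediction step can be illustrated as a forward scatter of frame-1 depths into a frame-2 buffer, keeping the nearest surface per pixel and leaving holes at the far value. This is a crude numpy sketch with caller-supplied (hypothetical) unproject/reproject callbacks, not the patented method itself:

```python
import numpy as np

def predict_depth(depth, unproject, reproject, shape):
    """Scatter frame-1 depths into a predicted frame-2 depth buffer.

    `unproject(x, y, z)` maps frame-1 pixels to a shared space, and
    `reproject(p)` maps them to frame-2 pixel coordinates plus depth.
    Pixels that receive no sample keep +inf (a depth hole).
    """
    h, w = depth.shape
    out = np.full(shape, np.inf)
    ys, xs = np.mgrid[0:h, 0:w]
    u, v, z = reproject(unproject(xs, ys, depth))
    u = np.round(u).astype(int)
    v = np.round(v).astype(int)
    ok = (u >= 0) & (u < shape[1]) & (v >= 0) & (v < shape[0])
    np.minimum.at(out, (v[ok], u[ok]), z[ok])  # nearest surface wins
    return out
```

With an identity camera motion the predicted buffer reproduces the input, which makes the scatter-and-min logic easy to sanity-check.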

Matthias Hullin, Sungkil Lee, Hans-Peter Seidel, and Elmar Eisemann.
Publication No.: WO2012/146303, Application No.: PCT/EP2011/056850, 2012.
Abstract: A method and device for efficiently simulating lens flares produced by an optical system are provided. The method comprises the steps of: simulating paths of rays from a light source through the optical system, the rays representing light; and estimating, for points in a sensor plane, an irradiance based on intersections of the simulated paths with the sensor plane.

27336, College of Software, Sungkyunkwan University, Tel. +82 31-299-4917, Seobu-ro 2066, Jangan-gu, Suwon, 16419, South Korea
Campus map (how to reach CGLab)