* Papers provided on this page are the authors' preprints. Only personal/academic use is allowed.

Journal Articles

Soonhyeon Kwon, Younguk Kim, Kihyuk Kim, and Sungkil Lee
Computer Animation and Virtual Worlds, 2017.
June 2, 2017.
Abstract: This paper presents a novel heterogeneous volume deformation technique and an intuitive volume animation authoring framework. Our volume deformation extends the previous technique based on moving least squares with a density-aware weighting metric for data-driven importance control and efficient upsampling-based volume synthesis. For user interaction, we present an intuitive visual metaphor and interaction schemes to support effective spatiotemporal editing of volume deformation animation. Our framework is implemented fully on graphics processors and thus suitable for quick-and-easy prototyping of volume deformation with improved controllability.
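The density-aware weighting can be illustrated with a small sketch. The following Python/NumPy snippet implements a standard moving-least-squares affine deformation in which each control handle's inverse-distance weight is further modulated by a per-handle density value; the modulation rule, parameters, and names are illustrative assumptions, not the paper's actual formulation.

```python
import numpy as np

def mls_affine_deform(v, src, dst, density, alpha=1.0, eps=1e-8):
    """Deform point v by moving least squares on handles src -> dst.
    'density' holds one scalar per handle and mimics a density-aware
    weight (a hypothetical stand-in for the paper's metric)."""
    d2 = np.sum((src - v) ** 2, axis=1) + eps
    w = density / d2 ** alpha              # density-modulated weights
    p_star = w @ src / w.sum()             # weighted centroids
    q_star = w @ dst / w.sum()
    p_hat, q_hat = src - p_star, dst - q_star
    # weighted affine fit: minimize sum_i w_i |p_hat_i M - q_hat_i|^2
    A = (p_hat * w[:, None]).T @ p_hat
    B = (p_hat * w[:, None]).T @ q_hat
    M = np.linalg.pinv(A) @ B              # pinv tolerates degeneracy
    return (v - p_star) @ M + q_star

# toy usage: two handles pull right; the low-density one counts less
src = np.array([[0., 0, 0], [1, 0, 0], [0, 1, 0]])
dst = src + np.array([[.2, 0, 0], [.2, 0, 0], [0, 0, 0]])
rho = np.array([1.0, 1.0, 0.3])
print(mls_affine_deform(np.array([.3, .3, 0.]), src, dst, rho))
```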

Yunji Kang, Woohyun Joo, Sungkil Lee, Dongkun Shin
Journal of Systems Architecture, 76, 17–27, 2017.
May 5, 2017.
Abstract: Many visual tasks on modern personal devices such as smartphones rely heavily on graphics processing units (GPUs) for fluent user experiences. Because most GPUs for embedded systems are nonpreemptive by nature, it is important to schedule GPU resources efficiently across multiple GPU tasks. We present a novel spatial resource sharing (SRS) technique for GPU tasks, called budget-reservation spatial resource sharing (BR-SRS) scheduling, which limits the number of GPU processing cores for a job based on the priority of the job. Such priority-driven resource assignment can prevent a high-priority foreground GPU task from being delayed by background GPU tasks. The BR-SRS scheduler is invoked only twice, at the arrival and completion of jobs, and thus the scheduling overhead is minimized as well. We evaluated the performance of our scheduling scheme on an Android-based smartphone and found that the proposed technique significantly improved the performance of high-priority tasks in comparison to previous temporal budget-based multi-task scheduling.
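As a rough illustration of the budget-reservation idea, the sketch below repartitions a fixed pool of GPU cores whenever a job arrives or completes, reserving a fixed share for foreground (high-priority) jobs. The core counts, reservation ratio, and priority encoding are all assumptions for illustration, not values from the paper.

```python
# Minimal sketch of budget-reservation spatial resource sharing.
# TOTAL_CORES and FG_RESERVED are illustrative, not from the paper.
TOTAL_CORES = 128
FG_RESERVED = 96   # budget reserved for foreground jobs (priority 0)

class BRSRSScheduler:
    def __init__(self):
        self.jobs = []  # list of (job_id, priority)

    def _repartition(self):
        fg = [j for j in self.jobs if j[1] == 0]
        bg = [j for j in self.jobs if j[1] > 0]
        alloc = {}
        fg_pool = FG_RESERVED if fg and bg else TOTAL_CORES
        for jid, _ in fg:
            alloc[jid] = fg_pool // max(len(fg), 1)
        bg_pool = TOTAL_CORES - (fg_pool if fg else 0)
        for jid, _ in bg:
            alloc[jid] = bg_pool // max(len(bg), 1)
        return alloc

    # the scheduler runs only on job arrival and completion
    def arrive(self, jid, priority):
        self.jobs.append((jid, priority))
        return self._repartition()

    def complete(self, jid):
        self.jobs = [j for j in self.jobs if j[0] != jid]
        return self._repartition()

s = BRSRSScheduler()
print(s.arrive("ui", 0))      # {'ui': 128}
print(s.arrive("filter", 1))  # foreground keeps its reserved budget
```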

Jun Suk Kim, Sungkil Lee, Min Young Chung
IEEE Trans. Vehicular Technology (in press), 2016.
ISSN: 0018-9545. IF=2.243. JCR=14/82. Dec. 20, 2016.
Abstract: In order to facilitate low-cost network connection of many devices, machine-type communication (MTC) has evolved into low-cost MTC (LC-MTC) in the third-generation partnership project (3GPP) standard. LC-MTC should be able to effectively handle intensive accesses through multiple narrow-band (NB) random-access channels (RACHs) assigned within the bandwidth of a long-term evolution (LTE) system. As the number of MTC devices and their congestion rapidly increase, the random-access scheme for LC-MTC RACH needs to be improved. This paper presents a novel random-access scheme that introduces virtual preambles of LC-MTC devices and associates them with RACH indices to effectively discern LC-MTC devices. In comparison to the sole use of preambles, our scheme allows an LC-MTC device to better choose a unique virtual preamble. Thereby, the probability of successful access of LC-MTC devices increases in contention-based random-access environments. We experimentally assessed our scheme, and the results show that it performs better than the existing preamble-based scheme in terms of collision probability, access delay, and access-blocking probability.
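The benefit of virtual preambles is easy to reproduce with a toy Monte-Carlo simulation: letting each device draw a (preamble, NB-RACH index) pair multiplies the effective preamble space by the number of RACHs. All parameters below are illustrative, not taken from the paper or the 3GPP specification.

```python
import random
from collections import Counter

def collision_prob(n_devices, n_preambles, n_rachs, trials=2000):
    """Monte-Carlo estimate of the per-device collision probability
    when each device picks a 'virtual preamble', i.e. a (preamble,
    NB-RACH index) pair; n_rachs=1 reduces to the plain scheme."""
    collided, total = 0, 0
    for _ in range(trials):
        picks = Counter((random.randrange(n_preambles),
                         random.randrange(n_rachs))
                        for _ in range(n_devices))
        collided += sum(v for v in picks.values() if v > 1)
        total += n_devices
    return collided / total

print("preamble only    :", collision_prob(80, 54, 1))
print("virtual preamble :", collision_prob(80, 54, 4))
```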

Kihong Lee, DongWoo Lee, Sungkil Lee, and Young Ik Eom
Journal of Supercomputing, 73(4), 1307–1321, 2017.
ISSN: 0920-8542. IF=1.088. JCR=47/105. March 23, 2017.
Abstract: A virtualized system generally suffers from low I/O performance, mainly caused by its inherent abstraction overhead and frequent CPU transitions between the guest and hypervisor modes. Recent research on polling-based I/O virtualization has partly solved the problem, but excessive polling trades intensive CPU usage for higher performance. This article presents a power-efficient and high-performance block I/O framework for a virtual machine, which allows us to use it even with a limited number of CPU cores in mobile or embedded systems. Our framework monitors system status and dynamically switches the I/O processing mode between the exit and polling modes, depending on the amount of current I/O requests and CPU utilization. It also dynamically controls the polling interval to reduce redundant polling. The highly dynamic nature of our framework leads to improvements in I/O performance with lower CPU usage as well. Our experiments showed that our framework outperformed the existing exit-based mechanisms with 10.8% higher I/O throughput while maintaining similar CPU usage (only a 3.1% increase). In comparison to systems solely based on the polling mechanism, ours reduced CPU usage to roughly 10.0% with no or negligible performance loss.
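The mode-switching policy can be sketched as a small control loop: poll when enough requests are pending and CPU headroom exists, otherwise fall back to exit (interrupt-driven) handling, and stretch the polling interval when polls rarely find work. The thresholds and adaptation rule below are illustrative assumptions, not the paper's tuned policy.

```python
# Sketch of the dynamic exit/polling mode switch described above.
POLL, EXIT = "poll", "exit"

def choose_mode(pending_io, cpu_util, mode, io_high=32, cpu_high=0.85):
    if pending_io >= io_high and cpu_util < cpu_high:
        return POLL   # enough requests to amortize polling
    if pending_io == 0 or cpu_util >= cpu_high:
        return EXIT   # fall back to interrupt-driven I/O
    return mode       # hysteresis: keep the current mode

def adapt_interval(interval_us, hits, polls):
    """Lengthen the polling interval when most polls find nothing."""
    hit_rate = hits / max(polls, 1)
    return min(interval_us * 2, 1000) if hit_rate < 0.25 \
        else max(interval_us // 2, 10)

mode = EXIT
for load in [(0, .2), (64, .4), (64, .9), (8, .5)]:
    mode = choose_mode(*load, mode)
    print(load, "->", mode)
print(adapt_interval(100, hits=2, polls=40))  # mostly-idle: back off
```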

Hyuntae Joo, Soonhyeon Kwon, Sangmin Lee, Elmar Eisemann, and Sungkil Lee
Computer Graphics Forum (Proc. EGSR'16), 35(4), 99–105, 2016.
ISSN: 0167-7055. IF=1.542. JCR=17/106. Jun. 22, 2016.
Abstract: We present an efficient ray-tracing technique to render bokeh effects produced by parametric aspheric lenses. Contrary to conventional spherical lenses, aspheric lenses generally do not permit a simple closed-form solution of ray-surface intersections. We propose a numerical root-finding approach, which uses tight proxy surfaces to ensure good initialization and convergence behavior. Additionally, we simulate mechanical imperfections resulting from lens fabrication via a texture-based approach. A fractional Fourier transform and spectral dispersion add further realism to the synthesized bokeh effect. Our approach is well-suited for execution on graphics processing units (GPUs), and we demonstrate complex defocus-blur and lens-flare effects.
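A minimal version of such a root finder, assuming the standard even-asphere sag equation and a flat tangent plane as a much cruder proxy than the paper's tight proxy surfaces, might look as follows.

```python
import numpy as np

def asphere_sag(r2, c, k, coeffs):
    """Even-asphere sag z(r^2) with curvature c, conic constant k,
    and higher-order coefficients (a4, a6, ...)."""
    z = c * r2 / (1.0 + np.sqrt(1.0 - (1.0 + k) * c * c * r2))
    for i, a in enumerate(coeffs):
        z += a * r2 ** (i + 2)        # a4*r^4 + a6*r^6 + ...
    return z

def intersect_asphere(o, d, c, k, coeffs, iters=8):
    """Ray o + t*d against an asphere with apex at z=0, axis +z.
    Initialized on the tangent plane (a crude proxy; the paper's
    tight proxies start closer), then refined by Newton steps."""
    t = -o[2] / d[2]                  # proxy start: plane z = 0
    for _ in range(iters):
        p = o + t * d
        f = p[2] - asphere_sag(p[0]**2 + p[1]**2, c, k, coeffs)
        h = 1e-6                      # numerical derivative df/dt
        p2 = o + (t + h) * d
        f2 = p2[2] - asphere_sag(p2[0]**2 + p2[1]**2, c, k, coeffs)
        t -= f * h / (f2 - f)
    return o + t * d

o = np.array([0.5, 0.0, -5.0]); d = np.array([0.0, 0.0, 1.0])
print(intersect_asphere(o, d, c=0.05, k=-1.0, coeffs=[1e-4]))
```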

Yuna Jeong, Sangmin Lee, Soonhyeon Kwon, and Sungkil Lee
The Visual Computer (Proc. Computer Graphics International'16), 32(6), 1025–1034, 2016.
ISSN: 0178-2789. IF=1.060. JCR=50/106. Jun., 2016.
Abstract: This article presents a novel parametric model that includes expressive chromatic aberrations in defocus blur rendering, along with its effective implementation using accumulation buffering. Our model modifies the thin-lens model to adopt axial and lateral chromatic aberrations, which allows us to easily extend them with nonlinear and artistic appearances beyond physical limits. For the dispersion to be continuous, we employ a novel unified 3D sampling scheme involving both the lens and the spectrum. We further propose a spectral equalizer to emphasize particular dispersion ranges. As a consequence, our approach enables more intuitive and explicit control of chromatic aberrations, unlike previous physically-based rendering methods.
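For intuition, axial chromatic aberration in a thin-lens model amounts to a wavelength-dependent focal length, which in turn shifts the circle of confusion per wavelength. The sketch below uses a Cauchy-style dispersion model with made-up constants; it illustrates only the thin-lens mechanics, not the paper's parametric model or spectral equalizer.

```python
def focal_length(lmbda_nm, f550=50.0, A=1.5168, B=4.2e3):
    """Thin-lens focal length vs wavelength via a Cauchy-style index
    n(l) = A + B/l^2; the constants are illustrative, not fitted."""
    n = A + B / lmbda_nm ** 2
    n550 = A + B / 550.0 ** 2
    return f550 * (n550 - 1.0) / (n - 1.0)   # lensmaker scaling

def coc_radius(z, focus_z, f, aperture=8.0):
    """Circle of confusion (mm) of a point at depth z when focused
    at focus_z, for an aperture of the given diameter (mm)."""
    zi = 1.0 / (1.0 / f - 1.0 / focus_z)     # sensor distance
    zp = 1.0 / (1.0 / f - 1.0 / z)           # image of the point
    return 0.5 * aperture * abs(zi - zp) / zp

for lmbda in (450.0, 550.0, 650.0):          # blue focuses shorter
    f = focal_length(lmbda)
    print(f"{lmbda:.0f} nm: f = {f:.3f} mm, CoC = "
          f"{coc_radius(2000.0, 1000.0, f):.4f} mm")
```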

Yuna Jeong, Hyuntae Joo, Gyeonghwan Hong, Dongkun Shin, and Sungkil Lee
IEEE Trans. Consumer Electronics, 61(3), 295–301, 2015.
ISSN: 0098-3063. IF=1.120. JCR=40/82. Aug., 2015.
Abstract: The Internet of Things (IoT) has recently emerged as a common platform and service for consumer electronics. This paper presents an interactive framework for visualizing and authoring IoT in indoor environments such as homes or small offices. The building blocks of the framework are virtual sensors and actuators that abstract physical things and their virtual behaviors on top of their physical networks. Their behaviors are abstracted and programmed through visual authoring tools on the web, which allows a casual consumer to easily monitor and define them even without knowing the underlying physical connections. A user study performed to assess the usability of the visual authoring showed that it is easy to use, understandable, and preferred over typical text-based script programming.

Myongchan Kim, Sungkil Lee, and Seungmoon Choi
IEEE Trans. Haptics, 7(3), 394–404, 2014.
ISSN: 1939-1412. IF=1.031. JCR=13/22. Sep. 17, 2014.
Abstract: Tactile feedback coordinated with visual stimuli has proven its worth in mediating immersive multimodal experiences, yet its authoring has relied on content artists. This article presents a fully automated framework for generating tactile cues from streaming images to provide synchronized visuotactile stimuli in real time. The spatiotemporal features of video images are analyzed on the basis of visual saliency and then mapped into tactile cues that are rendered on tactors installed on a chair. We also conducted two user experiments for performance evaluation. The first experiment investigated the effects of visuotactile rendering against visual-only rendering, demonstrating that visuotactile rendering made the movie-watching experience more interesting, immersive, and understandable. The second experiment compared the effectiveness of authoring methods and found that the automated authoring approach, used with care, can produce plausible tactile effects similar in quality to manual authoring.
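A heavily simplified version of the saliency-to-tactor mapping could pool a per-pixel saliency map over a coarse grid matching the tactor layout and drive each tactor's amplitude from the pooled value. The grid size, peak pooling, and gamma remapping below are illustrative assumptions, not the paper's mapping.

```python
import numpy as np

def saliency_to_tactors(saliency, rows=4, cols=5, gamma=0.7):
    """Pool a per-pixel saliency map onto a rows x cols tactor grid
    and return per-tactor vibration amplitudes in [0, 1]."""
    h, w = saliency.shape
    amps = np.zeros((rows, cols))
    for i in range(rows):
        for j in range(cols):
            cell = saliency[i*h//rows:(i+1)*h//rows,
                            j*w//cols:(j+1)*w//cols]
            amps[i, j] = cell.max()      # peak pooling per cell
    amps /= max(amps.max(), 1e-8)        # normalize per frame
    return amps ** gamma                 # crude perceptual remap

frame_saliency = np.random.rand(240, 320) ** 4   # sparse toy map
print(np.round(saliency_to_tactors(frame_saliency), 2))
```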

Sungkil Lee, Mike Sips, and Hans-Peter Seidel
IEEE Trans. Vis. and Computer Graphics, 19(10), 1746–1757, 2013.
* Invited to and presented at IEEE InfoVis 2013, Atlanta, GA.
ISSN: 1077-2626. IF=1.400. JCR=25/106. Oct., 2013.
Abstract: Visualization techniques often use color to present categorical differences to a user. When selecting a color palette, the perceptual qualities of color need careful consideration. Large coherent groups visually suppress smaller groups and are often visually dominant in images. This article introduces the concept of class visibility, used to quantitatively measure the utility of a color palette for presenting coherent categorical structure to the user. We present a color optimization algorithm based on our class visibility metric that makes categorical differences clearly visible to the user. We performed two user experiments on user preference and visual search to validate our visibility measure over a range of color palettes. The results indicate that visibility is a robust measure, and our color optimization can increase the effectiveness of categorical data visualizations.
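To make the idea concrete, here is an illustrative size-aware visibility proxy in CIELAB: each class is scored by its color difference from the size-weighted mean of the remaining classes, scaled by class size so that small classes need stronger contrast to score equally. This is explicitly not the paper's metric, only a sketch of the kind of quantity being optimized.

```python
import numpy as np

def visibility(colors_lab, sizes):
    """Illustrative class-visibility proxy (NOT the paper's metric):
    distance of each class color to the size-weighted mean of the
    other classes, scaled by the class's own size fraction."""
    sizes = np.asarray(sizes, float) / np.sum(sizes)
    vis = []
    for i, c in enumerate(colors_lab):
        w = np.delete(sizes, i)
        rest = np.delete(colors_lab, i, axis=0)
        surround = (w[:, None] * rest).sum(0) / w.sum()
        dE = np.linalg.norm(c - surround)     # CIELAB Delta-E (1976)
        vis.append(dE * np.sqrt(sizes[i]))    # large + distinct wins
    return np.array(vis)

palette = np.array([[60., 60, 20], [62., 55, 25], [40., -50, 40]])
print(visibility(palette, sizes=[0.7, 0.2, 0.1]))
```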

Yuna Jeong, Kangtae Kim, and Sungkil Lee
Computer Graphics Forum, 32(6), 126–134, 2013.
* Invited to and presented at Pacific Graphics 2014, Seoul, Korea.
ISSN: 0167-7055. IF=1.542. JCR=17/106. Sep. 13, 2013.
Abstract: This paper presents a GPU-based rendering algorithm for real-time defocus blur effects, which significantly improves on accumulation buffering. The algorithm combines three distinctive techniques: (1) adaptive discrete geometric levels of detail (LODs), made popping-free by blending visibility samples across the two adjacent geometric levels; (2) adaptive visibility/shading sampling via sample reuse; and (3) visibility supersampling via height-field ray casting. All three techniques are seamlessly integrated to lower the rendering cost of smooth defocus blur with high visibility sampling rates, while maintaining most of the quality of brute-force accumulation buffering.
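For reference, brute-force accumulation buffering (which this paper accelerates) averages renders taken from jittered positions on the lens. Below is a compact sketch with the renderer abstracted into a callback; the log2-style LOD rule is our assumption, not the paper's policy.

```python
import numpy as np

def accumulation_buffer_dof(render, n=32, aperture=1.0, seed=0):
    """Reference accumulation buffering: average the scene rendered
    from n jittered lens positions. 'render(dx, dy)' is an abstract
    callback returning an HxWx3 image; a real implementation would
    bind it to a GPU pass with a sheared projection per sample."""
    rng = np.random.default_rng(seed)
    acc = None
    for _ in range(n):
        r = aperture * np.sqrt(rng.random())   # uniform disk sample
        phi = 2.0 * np.pi * rng.random()
        img = render(r * np.cos(phi), r * np.sin(phi))
        acc = img if acc is None else acc + img
    return acc / n

def lod_for_coc(coc_pixels, base_level=0):
    """The key observation, crudely: blurrier means coarser LOD."""
    return base_level + max(0, int(np.log2(max(coc_pixels, 1.0))))

fake = lambda dx, dy: np.full((2, 2, 3), 0.5 + 0.1 * dx)
print(accumulation_buffer_dof(fake, n=8).mean(), lod_for_coc(9.0))
```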

Sungkil Lee and Elmar Eisemann
Computer Graphics Forum (Proc. EGSR'13), 32(4), 1–6, 2013.
ISSN: 0167-7055. IF=1.542. JCR=17/106. Jul. 18, 2013.
Abstract: We present a practical real-time approach for rendering lens-flare effects. While previous work employed costly ray tracing or complex polynomial expressions, we present a coarser, but also significantly faster solution. Our method is based on a first-order approximation of the ray transfer in an optical system, which allows us to derive a matrix that maps lens-flare-producing light rays directly to the sensor. The resulting approach is easy to implement and produces physically-plausible images at high framerates on standard off-the-shelf graphics hardware.
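First-order ray transfer composes per-element 2x2 matrices acting on a ray's (height, angle) pair, so a whole ghost path collapses into a single matrix that maps aperture coordinates to the sensor. The two-element system, distances, and folded-in reflections below are illustrative simplifications, not a real lens prescription.

```python
import numpy as np

T = lambda d: np.array([[1.0, d], [0.0, 1.0]])         # travel d
L = lambda f: np.array([[1.0, 0.0], [-1.0 / f, 1.0]])  # thin element

def ghost_matrix():
    """One 'ghost' as a single ray-transfer matrix: a ray enters the
    first element, bounces twice inside the system, and reaches the
    sensor. Reflections are folded into extra travel terms here, a
    deliberate simplification of the general formulation."""
    return T(40.0) @ L(80.0) @ T(2 * 10.0) @ L(50.0)

def flare_extent(M, light_angle, aperture=10.0):
    """Map the entrance-aperture extent directly to the sensor:
    x_sensor = A*x_in + B*theta under the first-order model."""
    xs = np.array([-aperture, aperture])
    rays = np.stack([xs, np.full(2, light_angle)])   # (height, angle)
    return (M @ rays)[0]      # flare extent on the sensor axis

print(flare_extent(ghost_matrix(), light_angle=0.1))
```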

Matthias Hullin, Elmar Eisemann, Hans-Peter Seidel, and Sungkil Lee.
ACM Trans. Graphics (Proc. SIGGRAPH'11), 30(4), 108:1–9, 2011.
ISSN: 0730-0301. IF=4.218. JCR=1/106. Jul. 25, 2011.
Abstract: Lens flare is caused by light passing through a photographic lens system in an unintended way. Often considered a degrading artifact, it has become a crucial component for realistic imagery and an artistic means that can even lead to an increased perceived brightness. So far, only costly offline processes allowed for convincing simulations of the complex light interactions. In this paper, we present a novel method to interactively compute physically-plausible flare renderings for photographic lenses. The underlying model covers many components that are important for realism, such as imperfections, chromatic and geometric lens aberrations, and antireflective lens coatings. Various acceleration strategies allow for a performance/quality tradeoff, making our technique applicable both in real-time applications and in high-quality production rendering. We further outline artistic extensions to our system.

Sunghoon Yim, Sungkil Lee, and Seungmoon Choi
Interacting with Computers, 23(3), 268–278, 2011.
ISSN: 0953-5438. IF=0.889. JCR=15/22.
Abstract: This article evaluates the usability of motion sensing-based interaction on a mobile platform using image browsing as a representative task. Three types of interfaces, a physical button interface, a motion-sensing interface using a high-precision commercial 3D motion tracker, and a motion-sensing interface using an in-house low-cost 3D motion tracker, are compared in terms of task performance and subjective preference. Participants were provided with prolonged training over 20 days, in order to compensate for the participants' unfamiliarity with the motion-sensing interfaces. Experimental results showed that the participants' task performance and subjective preference for the two motion-sensing interfaces were initially low, but they rapidly improved with training and soon approached the level of the button interface. Furthermore, a recall test, which was conducted 4 weeks later, demonstrated that the usability gains were well retained in spite of the long time gap between uses. Overall, these findings highlight the potential of motion-based interaction as an intuitive interface for mobile devices.

Sungkil Lee, Elmar Eisemann, and Hans-Peter Seidel.
ACM Trans. Graphics (Proc. SIGGRAPH'10), 29(4), 65:1–7, 2010.
ISSN: 0730-0301. IF=4.218. JCR=1/106.
Abstract: We present a novel rendering system for defocus-blur and lens effects. It supports physically-based rendering and outperforms previous approaches by involving a novel GPU-based tracing method. Our solution achieves more precision than competing real-time solutions, and our results are mostly indistinguishable from offline rendering. Our method is also more general and can integrate advanced simulations, such as simple geometric lens models enabling various lens-aberration effects. The latter are crucial for realism, but are often employed in artistic contexts too. We show that available artistic lenses can be simulated by our method. In this spirit, our work introduces intuitive control over depth-of-field effects. The physical basis is crucial as a starting point to enable new artistic renderings based on a generalized focal surface to emphasize particular elements in the scene while retaining a realistic look. Our real-time solution provides realistic as well as plausible expressive results.

Sungkil Lee, Elmar Eisemann, and Hans-Peter Seidel.
ACM Trans. Graphics (Proc. SIGGRAPH ASIA'09), 28(5), 134:1–6, 2009.
ISSN: 0730-0301. IF=4.218. JCR=1/106.
Abstract: We present a GPU-based real-time rendering method that simulates high-quality depth-of-field blur, similar in quality to multiview accumulation methods. Most real-time approaches have difficulty obtaining good approximations of visibility and view-dependent shading due to the use of a single view image. Our method also avoids rendering the scene multiple times, but can approximate different views by relying on a layered image-based scene representation. We present several performance and quality improvements, such as early culling, approximate cone tracing, and jittered sampling. Our method achieves artifact-free results for complex scenes and reasonable depth-of-field blur in real time.

Sungkil Lee, Gerard J. Kim, and Seungmoon Choi
IEEE Trans. Vis. and Computer Graphics, 15(3), 453–464, 2009.
ISSN: 1077-2626. IF=1.400. JCR=25/106.
Abstract: This article presents a real-time GPU-based postfiltering method for rendering acceptable depth-of-field effects suited for virtual reality. Blurring is achieved by nonlinearly interpolating mipmap images generated from a pinhole image. Major artifacts common in the postfiltering techniques such as a bilinear magnification artifact, intensity leakage, and blurring discontinuity are practically eliminated via magnification with a circular filter, anisotropic mipmapping, and smoothing of blurring degrees. The whole framework is accelerated using GPU programs for constant and scalable real-time performance required for virtual reality. We also compare our method to recent GPU-based methods in terms of image quality and rendering performance.
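The core of mipmap-based postfiltering is to pick a blur level per pixel from its circle of confusion and interpolate between adjacent pyramid levels. The sketch below uses a plain box-filtered pyramid, whereas the paper uses a circular magnification filter and anisotropic mipmapping; treat it as the baseline idea only.

```python
import numpy as np

def build_pyramid(img, levels=5):
    """Simple box-filtered mip pyramid (2x2 averaging stands in for
    the paper's anisotropic mipmapping)."""
    pyr = [img]
    for _ in range(levels - 1):
        a = pyr[-1]
        h, w = (a.shape[0] // 2) * 2, (a.shape[1] // 2) * 2
        a = a[:h, :w]
        pyr.append(0.25 * (a[0::2, 0::2] + a[1::2, 0::2] +
                           a[0::2, 1::2] + a[1::2, 1::2]))
    return pyr

def mip_blur(pyr, y, x, level):
    """Blur one pixel by interpolating two adjacent mip levels;
    'level' would come from the per-pixel circle of confusion,
    e.g. level = log2(coc_in_pixels)."""
    l0 = int(np.clip(level, 0, len(pyr) - 2))
    t = min(level - l0, 1.0)
    a = pyr[l0][y >> l0, x >> l0]
    b = pyr[l0 + 1][y >> (l0 + 1), x >> (l0 + 1)]
    return (1 - t) * a + t * b

img = np.random.rand(64, 64)
print(mip_blur(build_pyramid(img), 40, 40, level=2.3))
```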

Sungkil Lee, Gerard J. Kim, and Seungmoon Choi
IEEE Trans. Vis. and Computer Graphics, 15(1), 6–19, 2009.
ISSN: 1077-2626. IF=1.400. JCR=25/106.
Abstract: This paper presents a real-time framework for computationally tracking objects visually attended by the user while navigating in interactive virtual environments (VEs). In addition to the conventional bottom-up (stimulus-driven) saliency map, the proposed framework uses top-down (goal-directed) contexts inferred from the user's spatial and temporal behaviors and identifies the most plausibly attended objects among candidates in the object saliency map. The computational framework was implemented on the GPU, exhibiting high computational performance adequate for interactive VEs. A user experiment was also conducted to evaluate the prediction accuracy of the tracking framework by comparing objects regarded as visually attended by the framework to actual human gaze collected with an eye tracker. The results indicated that the accuracy was at a level well supported by the theory of human cognition for visually identifying single and multiple attentive targets, especially owing to the addition of top-down contextual information. Finally, we demonstrate how the visual attention tracking framework can be applied to managing the level of detail in VEs without any hardware for head or eye tracking.
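A compact rendition of the bottom-up/top-down combination: center-surround differences over blurred copies of a feature map give pixel saliency, per-object pooling gives object saliency, and a top-down weight (inferred from user behavior in the paper, supplied directly here as an assumption) modulates the final choice.

```python
import numpy as np

def blur(img, r):
    """Separable box blur of radius r (a stand-in for Gaussian)."""
    k = np.ones(2 * r + 1) / (2 * r + 1)
    t = np.apply_along_axis(lambda v: np.convolve(v, k, 'same'), 0, img)
    return np.apply_along_axis(lambda v: np.convolve(v, k, 'same'), 1, t)

def bottom_up_saliency(feature):
    """Center-surround difference: fine minus coarse scales, summed
    over scale pairs (a compact take on the classic saliency map)."""
    sal = np.zeros_like(feature)
    for c, s in [(1, 4), (2, 8), (4, 16)]:
        sal += np.abs(blur(feature, c) - blur(feature, s))
    return sal / sal.max()

def attended_object(sal, masks, topdown):
    """Object-level saliency: mean pixel saliency per object mask,
    modulated by a top-down context weight."""
    scores = {k: sal[m].mean() * topdown.get(k, 1.0)
              for k, m in masks.items()}
    return max(scores, key=scores.get)

lum = np.random.rand(64, 64)
masks = {"door": np.zeros((64, 64), bool), "lamp": np.zeros((64, 64), bool)}
masks["door"][10:30, 10:20] = True; masks["lamp"][40:50, 40:60] = True
print(attended_object(bottom_up_saliency(lum), masks, {"door": 1.5}))
```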

Sungkil Lee, Gerard J. Kim, and Seungmoon Choi
Computer Graphics Forum (Proc. Pacific Graphics'08), 27(7), 1955–1962, 2008.
ISSN: 0167-7055. IF=1.542. JCR=17/106.
Abstract: We present a real-time method for rendering a depth-of-field effect based on per-pixel layered splatting, where source pixels are scattered onto one of the three layers of a destination pixel. In addition, the missing information behind foreground objects is filled with an additional image of the areas occluded by nearer objects. The method creates high-quality depth-of-field results even in the presence of partial occlusion, without the major artifacts often present in previous real-time methods. The method can also be applied to simulating defocused highlights. The entire framework is accelerated by the GPU, enabling real-time post-processing for both off-line and interactive applications.

Sungkil Lee and Gerard J. Kim.
Interacting with Computers, 20(4–5), 491–502, 2008.
ISSN: 0953-5438. IF=0.889. JCR=15/22.
Abstract: This article reports two human experiments to investigate the effects of visual cues and sustained attention on spatial presence over a period of prolonged exposure in virtual environments. Inspired by the two functional subsystems subserving spatial and object vision in the human brain, visual cues and sustained attention were each classified into spatial and object cues, and spatial and non-spatial attention, respectively. In the first experiment, the effects of visual cues on spatial presence were examined when subjects were exposed to virtual environments configured with combinations of spatial and object cues. It was found that both types of visual cues enhanced spatial presence with saturation over a period of prolonged exposure, but the contribution of spatial cues became more relevant with longer exposure time. In the second experiment, subjects were asked to carry out two tasks involving sustained spatial attention and sustained non-spatial attention. We observed that spatially directed attention improved spatial presence more than non-spatially directed attention did. Furthermore, spatial attention had a positive interaction with detailed object cues.

Jane Hwang, J. Jung, S. Yim, J. Cheon, Sungkil Lee, S. Choi, and Gerard J. Kim.
International Journal of Virtual Reality, 5(2), 59–66, 2006.
Abstract: While hand-held computing devices are capable of rendering advanced 3D graphics and processing multimedia data, they are not designed to provide and induce a sufficient sense of immersion and presence for virtual reality. In this paper, we propose minimal requirements for realizing VR on a hand-held device. Furthermore, based on the proposed requirements, we have designed and implemented a low-cost hand-held VR platform by adding multimodal sensors and display components to a hand-held PC. The platform enables a motion-based interface, an essential part of realizing VR on a small hand-held device, and provides outputs in three modalities (visual, aural, and tactile/haptic) for a reasonable sensory experience. We showcase our platform and demonstrate the possibilities of hand-held VR through three VR applications: a typical virtual walkthrough, a 3D multimedia content browser, and a motion-based racing game.




Conference Papers/Posters

Sangmin Lee and Sungkil Lee
Eurographics Posters, 2016.
May 9, 2016.
Abstract: Lens flare, comprising diffraction patterns of direct lights and ghosts of an aperture, is one of the artistic artifacts in optical systems. The generation of far-field diffraction patterns has commonly used the Fourier transform of the iris aperture. While such outcomes are physically faithful, more flexible and intuitive editing of diffraction patterns has not been explored so far. In this poster, we present a novel scheme of diffraction synthesis, which additively integrates diffraction elements. We decompose the aperture into curved edges and a circular core so that they abstract non-symmetric streaks and circular core highlights, respectively. We then apply the Fourier transform to each, rotate them, and finally composite them into a single output image. In this way, we can easily generate diffraction patterns similar to those of the source aperture, as well as more exaggerated ones.
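The underlying far-field model is the squared magnitude of the aperture's 2D Fourier transform, and the poster's additive composition can be mimicked by transforming decomposed aperture parts separately and summing. The disk/ring decomposition below is illustrative, and the per-edge rotation step is omitted.

```python
import numpy as np

def far_field(aperture):
    """Fraunhofer far-field pattern: squared magnitude of the
    centered 2D Fourier transform of the aperture mask."""
    F = np.fft.fftshift(np.fft.fft2(np.fft.ifftshift(aperture)))
    return np.abs(F) ** 2

n = 256
y, x = np.mgrid[-1:1:n*1j, -1:1:n*1j]
disk = (x**2 + y**2 < 0.25).astype(float)   # circular core
edge = disk - (x**2 + y**2 < 0.23)          # thin ring as an "edge"

# additively composite per-element spectra, as in the poster
pattern = far_field(disk - edge) + far_field(edge)
print(pattern.shape, pattern.max())
```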

Kihyuk Kim and Sungkil Lee
Eurovis Posters, 2015.
Abstract: Volume editing with moving least squares is one of the effective schemes to achieve continuous and smooth deformation of existing volumes, but its interactive authoring has not been explored extensively. We present a framework for interactive editing of volume data with free-form deformation, which provides intuitive and interactive feedback on the fly. Given control points, we extend moving least squares with their visual metaphor to further encompass non-spatial attributes including lightness, density, and hue. Furthermore, a full GPU implementation of our framework achieves instant real-time feedback with a quick-and-easy volume-editing metaphor.

Hyunjin Lee, Yuna Jeong, and Sungkil Lee
ACM SIGGRAPH ASIA Posters, 2013.
Abstract: This paper presents a recursive tessellation scheme, which can represent virtually infinitesimal details beyond the typical limits of graphics hardware at run time, further combined with multiple levels of displacement mapping.

Yuna Jeong, Kangtae Kim, and Sungkil Lee
ACM SIGGRAPH Posters, 2012.
* Selected for the semifinal list of the SIGGRAPH Student Research Competition
Abstract: This paper presents a new DOF rendering algorithm based on distributed rasterization [Haeberli and Akeley 1990] and LOD management. Our solution allows us to maintain the benefit of the object-based approach without spatiotemporal quality loss, while achieving real-time performance. A key idea is that geometric degradation of models is not perceived when they are blurred. Hence, lower details can be used for blurred models, greatly improving performance. Another challenge here is avoiding temporal popping artifacts resulting from transitions between adjacent discrete levels. To avoid this problem, we propose a novel blending method for LOD.

Myongchan Kim, Sungkil Lee, and Seungmoon Choi
Proc. Eurohaptics (LNCS 7282), 258–269, 2012.
ISSN: 0302-9743. June 13, 2012.

Sungkil Lee, Gerard J. Kim, and Seungmoon Choi
Proc. ACM VR Software and Tech., 29–38, 2007.
* Invited to the TVCG special section on VRST'07 best papers.
Abstract: This paper presents a real-time framework for computationally tracking objects visually attended by the user while navigating in interactive virtual environments. In addition to the conventional bottom-up (stimulus-driven) features, the framework also uses top-down (goal-directed) contexts to predict the human gaze. The framework first builds feature maps using preattentive features such as luminance, hue, depth, size, and motion. The feature maps are then integrated into a single saliency map using the center-surround difference operation. This pixel-level bottom-up saliency map is converted to an object-level saliency map using the item buffer. Finally, the top-down contexts are inferred from the user's spatial and temporal behaviors during interactive navigation and used to select the most plausibly attended object among candidates produced in the object saliency map. The computational framework was implemented using the GPU and exhibited extremely fast computing performance (5.68 msec for a 256x256 saliency map), substantiating its adequacy for interactive virtual environments. A user experiment was also conducted to evaluate the prediction accuracy of the visual attention tracking framework with respect to actual human gaze data. The attained accuracy level was well supported by the theory of human cognition for visually identifying single and multiple attentive targets, especially due to the addition of top-down contextual information. The framework can be effectively used for perceptually based rendering without employing an expensive eye tracker, such as providing depth-of-field effects and managing the level of detail in virtual environments.

Sungkil Lee, Gerard J. Kim, and Janghan Lee.
Proc. ACM VR Software and Tech., 73–80, 2004.
Abstract: Presence is one of the goals of many virtual reality systems. Historically, in the context of virtual reality, the concept of presence has been associated largely with spatial perception (a bottom-up process), as its informal definition of "feeling of being there" suggests. However, recent studies in presence have challenged this view and attempted to widen the concept to include psychological immersion, thus linking more high-level elements (processed in a top-down fashion) to presence, such as story and plot, flow, attention and focus, identification with the characters, and emotion. In this paper, we experimentally studied the relationship between two content elements, each representing one of the two axes of the presence dichotomy: perceptual cues for spatial presence and sustained attention for (psychological) immersion. Our belief was that spatial perception or presence and a top-down processed concept such as voluntary attention have only a very weak relationship; thus our experimental hypothesis was that sustained attention would positively affect spatial presence in a virtual environment with impoverished perceptual cues, but have no effect in an environment rich in them. In order to confirm the existence of sustained attention in the experiment, fMRI scans of the subjects were taken and analyzed as well. The experimental results showed that attention had no effect on spatial presence, even in the environment with impoverished spatial cues.

Sungkil Lee, Gerard J. Kim, Albert Rizzo, and Hyungjin Park.
Proc. 7th Annual International Workshop on Presence, 20–27, 2004.
Abstract: Spatial presence, among the many aspects of presence, is the sense of physical and concrete space, often dubbed the sense of "being there." This paper theorizes on how "spatial" presence is formed by various types of artificial cues in a virtual environment, whether form or content. We believe that spatial presence is a product of an unconscious effort to correctly register oneself into the virtual environment in a consistent manner. We hypothesize that this process is perceptual and bottom-up in nature, and rooted in the reflexive and adaptive behavior to react to and resolve the mismatch in spatial cues between the physical space where the user is and the virtual space that the user looks at, hears from, and interacts with. Hinted by the fact that our brain has two major paths for processing sensory input, the "where" path for determining object locations and the "what" path for identifying objects, we categorize the sensory stimulation cues in the virtual environment accordingly and investigate their relationships and how they affect the user in adaptively registering oneself into the virtual environment, thus creating spatial presence. Based on the results of a series of our experiments and other bodies of research, we postulate that while low-level perceptual spatial cues are sufficient for creating spatial presence, they can be affected and modulated by other factors, whether form or content. These results provide important insights into constructing a model of spatial presence, its measurement, and guidelines for configuring location-based virtual reality applications.




Patents

Abstract: According to the present invention, a lens-flare generation method and apparatus are provided that simulate lens-flare effects through paraxial-approximation-based linear approximation, generating a lens flare that exploits the physical characteristics of the lens system at a remarkably high speed compared with the conventional art. Further, according to an embodiment of the present invention, a non-linear effect may be added to the linear pattern-based lens-flare effect, generating a realistic lens flare reflecting most of the physical characteristics arising from the lens system. Further, use of a pre-recorded non-linear pattern allows for generation of a lens flare of similar quality to the existing light-tracking-based simulation at a higher speed than the conventional art.

Abstract: A method for performing occlusion queries is disclosed. The method includes the steps of: (a) a graphics processing unit (GPU) using a first depth buffer of a first frame to predict a second depth buffer of a second frame; and (b) the GPU performing occlusion queries for the second frame by using the predicted second depth buffer, wherein the first frame is a frame predating the second frame. In accordance with the present invention, a configuration for classifying objects into occluders and occludees is not required, and the occlusion queries for the predicted second frame are acquired in advance at the end of the first frame or the beginning of the second frame.
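The depth-buffer prediction in step (a) amounts to reprojecting the first frame's depth through both cameras' matrices. The following point-splat sketch is one simple way to realize it (the patent does not prescribe this exact procedure); disocclusion holes would need conservative treatment in practice.

```python
import numpy as np

def predict_depth(depth, inv_vp_prev, vp_next):
    """Predict the next frame's depth buffer by reprojecting the
    previous frame's depth. Point splatting keeps the nearest depth
    per target pixel; disocclusions remain as +inf holes."""
    h, w = depth.shape
    pred = np.full((h, w), np.inf)
    ys, xs = np.mgrid[0:h, 0:w]
    ndc = np.stack([2*xs/w - 1, 2*ys/h - 1, depth, np.ones_like(depth)])
    world = np.einsum('ij,jhw->ihw', inv_vp_prev, ndc)
    clip = np.einsum('ij,jhw->ihw', vp_next, world)
    ndc2 = clip[:3] / clip[3]
    px = ((ndc2[0] + 1) * 0.5 * w).astype(int)
    py = ((ndc2[1] + 1) * 0.5 * h).astype(int)
    ok = (px >= 0) & (px < w) & (py >= 0) & (py < h)
    np.minimum.at(pred, (py[ok], px[ok]), ndc2[2][ok])  # nearest wins
    return pred

# sanity check: identical cameras reproduce the input depth buffer
d = np.random.rand(4, 4)
I = np.eye(4)
print(np.abs(predict_depth(d, I, I) - d).max())   # ~0.0
```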

Matthias Hullin, Sungkil Lee, Hans-Peter Seidel, and Elmar Eisemann.
Publication No.: WO2012/146303, Application No.: PCT/EP2011/056850, 2012.
Abstract: A method and device for efficiently simulating lens flares produced by an optical system are provided. The method comprises the steps of: simulating paths of rays from a light source through the optical system, the rays representing light; and estimating, for points in a sensor plane, an irradiance based on intersections of the simulated paths with the sensor plane.




27336, Dept. of Software, College of Software, Sungkyunkwan University, Tel. +82-31-299-4917, Fax +82-31-299-4969, 2066 Seobu-ro, Jangan-gu, Suwon 16419, South Korea
Campus map (how to reach CGLab)