* Copyright Disclaimer: paper preprints on this page are provided only for personal academic use, not for redistribution.
Journal and Conference Papers
Head-Mounted Displays (HMDs) are becoming increasingly popular as a crucial component of Virtual Reality (VR). However, contemporary HMDs enforce a simple optical structure due to their constrained form factor, which impedes the use of multiple lens elements that could otherwise reduce aberrations. As a result, they introduce severe aberrations and imperfections in optical imagery, causing visual fatigue and degrading the immersive experience of being present in VR. To address this issue without modifying the hardware system, we present a novel software-driven approach that compensates for the aberrations in HMDs in real time. Our approach involves pre-correction that deconvolves an input image to minimize the difference between its after-lens image and the ideal image. We characterize the specific wavefront aberration and Point Spread Function (PSF) of the optical system using Zernike polynomials. To achieve higher computational efficiency, we improve the conventional deconvolution based on the hyper-Laplacian prior by adopting a regularization constraint term based on L2 optimization and the input-image gradient. Furthermore, we implement our solution entirely on a Graphics Processing Unit (GPU) to ensure constant and scalable real-time performance for interactive VR. Our experiments demonstrate that our solution reliably reduces the aberration of the after-lens images in real time.
(Submitted) Nov. 07, 2023, (Revised) Jan. 15, 2024, (Accepted) Feb. 06, 2024, (Published) Mar. 04, 2024 IF=1.9, PCTL=31.5. ISSN: 1559-128X, Optica Publishing Group, USA |
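The pre-correction above amounts to deconvolving the input image by the lens PSF before display, so that the lens itself re-blurs it back toward the ideal image. A minimal frequency-domain sketch of that idea (a plain Wiener-style filter with an L2 regularizer, not the paper's hyper-Laplacian formulation; the PSF and regularization weight below are hypothetical):

```python
import numpy as np

def precorrect(image, psf, reg=1e-2):
    """Wiener-style pre-correction: deconvolve `image` by `psf` so that
    re-blurring through the lens approximately restores the ideal image.
    `reg` is an L2 regularization weight (assumed, tunable)."""
    H = np.fft.fft2(np.fft.ifftshift(psf))       # lens transfer function
    I = np.fft.fft2(image)
    # Regularized inverse filter: conj(H) / (|H|^2 + reg)
    X = np.conj(H) * I / (np.abs(H) ** 2 + reg)
    return np.real(np.fft.ifft2(X))

# Tiny example: a centered 3x3 box PSF applied to a synthetic image
img = np.zeros((32, 32)); img[16, 16] = 1.0
psf = np.zeros((32, 32)); psf[15:18, 15:18] = 1.0 / 9.0
pre = precorrect(img, psf)
```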
This paper presents the model and rendering algorithm of Potentially Visible Hidden Volumes (PVHVs) for multi-view image warping. PVHVs are 3D volumes that are occluded at a known source view, but potentially visible at novel views. Given a bound of novel views, we define PVHVs using the edges of foreground fragments from the known view and the bound of novel views. PVHVs can be used to batch-test the visibilities of source fragments without iterating individual novel views in multi-fragment rendering, and thereby, cull redundant fragments prior to warping. We realize the model of PVHVs in Depth Peeling (DP). Our Effective Depth Peeling (EDP) can reduce the number of completely hidden fragments, capture important fragments early, and reduce warping cost. We demonstrate the benefit of our PVHVs and EDP in terms of memory, quality, and performance in multi-view warping.
(Submitted) Jan. 25, (Accepted) Apr. 28, (Published) Jul. 26, 2023 IF=6.2, PCTL=90.3. ISSN: 0730-0301, ACM, USA |
This paper presents a novel Deep Learning (DL) model that estimates camera parameters, including camera rotations, field of view, and distortion parameter, from single-view images. The classical approach often analyzes geometric cues such as vanishing points, but is applicable only when such geometric cues exist in images. To alleviate this constraint, we use DL, and employ implicit geometric cues, which can reflect the inter-image changes of camera parameters and be observed more frequently in images. Our geometric cues are inspired by two important intuitions: 1) geometric appearance changes caused by camera parameters are the most prominent in object edges; 2) spatially consistent objects (in size and shape) better reflect the inter-image changes of camera parameters. To realize our approach, we propose a weighted edge-attention mechanism that assigns higher weights to the edges of spatially consistent objects. Our experiments prove that our edge-driven geometric emphasis significantly improves the estimation accuracy of camera parameters over existing DL-based approaches.
(Submitted) Nov. 09, 2022, (Accepted) Feb. 13, 2023, (Published) Feb. 16, 2023 IF=3.476, PCTL=62.14. ISSN: 2169-3536, IEEE, USA |
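The edge-attention idea can be illustrated with a hand-crafted stand-in: a Sobel edge-magnitude map, normalized to [0, 1], used as a spatial attention weight (schematic only; the paper's attention weights are learned and further account for the spatial consistency of objects):

```python
import numpy as np

def edge_attention(img):
    """Sobel edge-magnitude map normalized to [0, 1], usable as a spatial
    attention weight over a feature map (hand-crafted stand-in)."""
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], float)
    ky = kx.T
    H, W = img.shape
    pad = np.pad(img, 1, mode='edge')
    # Correlate by summing shifted, weighted copies of the padded image
    gx = sum(kx[i, j] * pad[i:i + H, j:j + W] for i in range(3) for j in range(3))
    gy = sum(ky[i, j] * pad[i:i + H, j:j + W] for i in range(3) for j in range(3))
    mag = np.hypot(gx, gy)
    return mag / (mag.max() + 1e-8)

img = np.zeros((8, 8))
img[:, 4:] = 1.0            # vertical step edge between columns 3 and 4
att = edge_attention(img)
```

The attention peaks along the step edge and vanishes in flat regions, which is the behavior the paper's intuition 1) relies on.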
Joo Young Chun, Hyun-Jin Kim, Ji-Won Hur, Dooyoung Jung, Heon-Jeong Lee, Seung Pil Pack, Sungkil Lee, Gerard Kim, Chung-Yean Cho, Seung-Moo Lee, Hyeri Lee, Seungmoon Choi, Taesu Cheong, and Chul-Hyun Cho
JMIR Serious Games, 10(3), e38284, 2022.
Background: Social anxiety disorder (SAD) is a fear of social situations where a person anticipates being evaluated negatively. Changes in autonomic response patterns are related to the expression of anxiety symptoms. Virtual reality (VR) sickness can inhibit VR experiences. Objective: This study predicts the severity of specific anxiety symptoms and VR sickness in patients with SAD using machine learning based on in-situ autonomic physiological signals (heart rate and galvanic skin response) during VR treatment sessions. Methods: This study had 32 participants with SAD taking part in six VR sessions. During each VR session, all participants’ heart rate and galvanic skin response were measured in real-time. We assessed specific anxiety symptoms using the Internalized Shame Scale (ISS), the post-event rumination scale (PERS), and VR sickness using the simulator sickness questionnaire (SSQ) during four VR sessions (#1, #2, #4, and #6). Logistic regression, random forest, and naive Bayes classifiers were used to classify and predict the severity groups in the ISS, PERS, and SSQ subdomains based on in-situ autonomic physiological signal data. Results: The severity of social anxiety disorder was predicted with three machine learning models. According to the F1 score, the highest prediction performance among each domain for severity was as follows: The F1 score of the ISS mistake anxiety subdomain was 0.8421 using the logistic regression model, the PERS positive subdomain was 0.7619 using the naïve Bayes classifier, and the total VR sickness was 0.7059 using the random forest model. Conclusions: This study could predict specific anxiety symptoms and VR sickness during VR intervention by autonomic physiological signals alone in real-time. Machine learning models predict individuals' severe and non-severe psychological states based on in-situ physiological signal data during VR intervention for real-time interactive services.
These models support the diagnosis of specific anxiety symptoms and VR sickness with minimal participant bias. Clinical Trial: CRIS Registration Number-KCT0003854.
(Submitted) May 27, 2022, (Accepted) July 21, 2022, (Published) Sep. 16, 2022 IF=3.364, PCTL=60.09. ISSN: 2291-9279, JMIR PUBLICATIONS, INC, Canada |
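As an illustration of classifying severity from physiological features, here is a toy numpy-only Gaussian naive Bayes, one of the three model families named in the abstract (the feature values and class structure are made up; the study used real heart-rate and galvanic-skin-response recordings):

```python
import numpy as np

def fit_gnb(X, y):
    """Gaussian naive Bayes: per-class feature means, variances, and priors."""
    params = {}
    for c in np.unique(y):
        Xc = X[y == c]
        params[c] = (Xc.mean(0), Xc.var(0) + 1e-6, len(Xc) / len(X))
    return params

def predict_gnb(params, X):
    """Pick the class maximizing log-likelihood + log-prior per sample."""
    classes = sorted(params)
    scores = []
    for c in classes:
        mu, var, prior = params[c]
        ll = -0.5 * (np.log(2 * np.pi * var) + (X - mu) ** 2 / var)
        scores.append(ll.sum(1) + np.log(prior))
    return np.array(classes)[np.argmax(scores, axis=0)]

# Toy data: [mean heart rate, mean GSR] per session (hypothetical features)
rng = np.random.default_rng(0)
X = np.vstack([rng.normal([70, 2], 1, (20, 2)),    # non-severe (class 0)
               rng.normal([90, 5], 1, (20, 2))])   # severe (class 1)
y = np.repeat([0, 1], 20)
model = fit_gnb(X, y)
acc = (predict_gnb(model, X) == y).mean()
```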
This article presents a real-time bokeh rendering technique that splats pre-computed sprites but takes dynamic visibilities and intrinsic appearances into account at runtime. To attain alias-free looks without excessive sampling on a lens, the visibilities of strong highlights are densely sampled using rasterization, while regular objects are sparsely sampled using conventional defocus-blur rendering. The intrinsic appearance is dynamically transformed from a precomputed look-up table, which encodes radial aberrations against image distances in a compact 2D texture. Our solution can render complex bokeh effects without undersampling artifacts in real time, and greatly improve the photorealism of defocus-blur rendering.
(Submitted) Mar. 16, (Revised) July 22, (Accepted) Aug. 2, 2020, (Online-published) Aug. 5, 2020, (Published) Feb. 1, 2022 IF=5.226, PCTL=88.64. ISSN: 1077-2626, IEEE Computer Society, USA |
This paper presents a scalable online occlusion culling algorithm, which significantly improves the previous raster occlusion culling using object-level bounding volume hierarchy. Given occluders found with temporal coherence, we find and rasterize coarse groups of potential occludees in the hierarchy. Within the rasterized bounds, per-pixel ray casting tests fine-grained visibilities of every individual occludee. We further propose acceleration techniques including the read-back of counters for tightly-packed multidrawing and occluder filtering. Our solution requires only constant draw calls for batch occlusion tests, while avoiding costly iteration for hierarchy traversal. Our experiments prove that our solution outperforms the existing solutions in terms of scalability, culling efficiency, and occlusion-query performance.
(Submitted) Oct. 5, (Conditionally accepted) Dec. 15, 2020, (Accepted) Feb. 8, 2021 IF=2.078, PCTL=55.09. ISSN: 0167-7055, Wiley Blackwell Publishing, England |
Ji-Won Hur, Hyemin Shin, Dooyoung Jung, Heon-Jeong Lee, Sungkil Lee, Gerard J. Kim, Chung-Yean Cho, Seungmoon Choi, Seung-Moo Lee, Chul-Hyun Cho
JMIR Mental Health, 8(4), e25731, 2021.
Background: Although it has been well demonstrated that the efficacy of VR therapies for social anxiety disorder (SAD) is comparable to traditional cognitive-behavioral therapy, little is known about the effect of VR on the pathological self-referential processes in SAD. Objective: This study aims to determine the changes in self-referential processing and their neural mechanisms following VR treatment. Methods: We obtained scans from 25 participants with a primary diagnosis of SAD. Then, the subjects received VR-based exposure treatment starting immediately after the baseline MRI scan and clinical assessments and continuing for six sessions. Eventually, 21 SAD subjects completed follow-up scans after the sixth session of VR therapy in which the subjects were asked to judge whether a series of words (positive, negative, neutral) was relevant to themselves. Twenty-two age-, sex-, and handedness-matched controls also underwent baseline clinical assessments and fMRI scans. Results: The whole-brain analysis revealed that compared with the controls, the SAD group had increased neural responses during positive self-referential processing in the medial temporal and frontal cortexes. This group also showed increased left insular activation and decreased right middle frontal gyrus activation during negative self-referential processing. After undergoing VR-based therapy, the subjects with SAD rated negative words as less relevant (P = .066) and positive words as more relevant (P = .064) to themselves at the postintervention session than at baseline. Their overall symptoms, as measured with the Social Phobia Scale (SPS) and Post-Event Rumination Scale (PERS), were reduced accordingly. We also found that these subjects displayed greater activity in a group of brain regions responsible for self-referential and autobiographical memory processes while viewing positive words at the postintervention fMRI scan. 
Compared with that at baseline, higher activation was found within broad somatosensory areas of the subjects with SAD during negative self-referential processing following VR therapy. Conclusions: The current fMRI findings reflect the enhanced physiological and cognitive processing of individuals with SAD in response to self-referential information. They also provide neural evidence of the effect of VR exposure therapy on social anxiety and self-derogation. Clinical Trial: CRIS Registration Number-KCT0003854
(Submitted) Nov. 13, 2020, (Accepted) Mar. 12, 2021, (Published) Apr. 14, 2021 IF=3.535, PCTL=70. ISSN: 2368-7959, JMIR Publications, Inc, Canada |
The single-pair all-shortest-path problem is to find all possible shortest paths, given a single source-destination pair in a graph. Due to the lack of efficient algorithms for the single-pair all-shortest-path problem, many applications have used diverse modifications of existing shortest-path algorithms such as Dijkstra’s algorithm. Such approaches can facilitate the analysis of medium-sized static networks, but the heavy computational cost impedes their use for massive and dynamic real-world networks. In this paper, we present a novel single-pair all-shortest-path algorithm, which performs well on massive networks as well as dynamic networks. The efficiency of our algorithm stems from novel 2-hop label-based query processing on large-size networks. For dynamic networks, we also demonstrate how to incrementally maintain all shortest paths in 2-hop labels, which allows our algorithm to handle the topological changes of dynamic networks such as insertion or deletion of edges. We carried out experiments on real-world large datasets, and the results confirm the effectiveness of our algorithms for the single-pair all-shortest-path computation and the incremental maintenance of 2-hop labels.
(Accepted) Aug. 27, 2020, (Online Published) Sep. 23, 2020, (Published) Feb. 6, 2021 IF=5.91, PCTL=94.551. ISSN: 0020-0255, Elsevier Science Inc., USA |
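The 2-hop-label query at the heart of this approach can be sketched in a few lines: each vertex stores distances to a small set of hubs, and a shortest s-t distance is the minimum of d(s, h) + d(h, t) over shared hubs (the toy labels below cover a path graph a-b-c-d; the paper's labels additionally record path information and support incremental updates):

```python
def hop2_distance(labels, s, t):
    """Query shortest-path distance from 2-hop labels.
    `labels[v]` maps hub -> dist(v, hub); the cover property guarantees
    that some shortest s-t path passes through a shared hub."""
    Ls, Lt = labels[s], labels[t]
    best = float('inf')
    for hub, ds in Ls.items():
        dt = Lt.get(hub)
        if dt is not None:
            best = min(best, ds + dt)
    return best

# Valid hub labeling for the path graph a - b - c - d (hub order a < b < c < d)
labels = {
    'a': {'a': 0},
    'b': {'a': 1, 'b': 0},
    'c': {'a': 2, 'b': 1, 'c': 0},
    'd': {'a': 3, 'b': 2, 'c': 1, 'd': 0},
}
```

For example, the query for b-d inspects shared hubs a and b and returns min(1 + 3, 0 + 2) = 2.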
Hyun-Jin Kim, Seulki Lee, Dooyoung Jung, Ji-Won Hur, Heon-Jeong Lee, Sungkil Lee, Gerard J. Kim, Chung-Yean Cho, Seungmoon Choi, Seung-Moo Lee, and Chul-Hyun Cho
J. Med. Internet Res, 22(10), e23024:1–16, 2020.
Background: Social anxiety disorder (SAD) is characterized by excessive fear of negative evaluation and humiliation in social interactions and situations. Virtual reality (VR) treatment is a promising intervention option for SAD. Objective: The purpose of this study was to create a participatory and interactive VR intervention for SAD. Treatment progress, including the severity of symptoms and the cognitive and emotional aspects of SAD, was analyzed to evaluate the effectiveness of the intervention. Methods: In total, 32 individuals with SAD and 34 healthy control participants were enrolled in the study through advertisements for online bulletin boards at universities. A VR intervention was designed consisting of three stages (introduction, core, and finishing) and three difficulty levels (easy, medium, and hard) that could be selected by the participants. The core stage was the exposure intervention in which participants engaged in social situations. The effectiveness of treatment was assessed through the Beck Anxiety Inventory (BAI), State-Trait Anxiety Inventory (STAI), Internalized Shame Scale (ISS), Post-Event Rumination Scale (PERS), Social Phobia Scale (SPS), Social Interaction Anxiety Scale (SIAS), Brief Fear of Negative Evaluation Scale (BFNE), and Liebowitz Social Anxiety Scale (LSAS). Results: In the SAD group, scores on the BAI (F=4.616, P=.009), STAI-Trait (F=4.670, P=.004), ISS (F=6.924, P=.001), PERS-negative (F=1.008, P<.001), SPS (F=8.456, P<.001), BFNE (F=6.117, P=.004), KSAD (F=13.259, P<.001), and LSAS (F=4.103, P=.009) significantly improved over the treatment process. Compared with the healthy control group before treatment, the SAD group showed significantly higher scores on all scales (P<.001), and these significant differences persisted even after treatment (P<.001). In the comparison between the VR treatment responder and nonresponder subgroups, there was no significant difference across the course of the VR session.
Conclusions: These findings indicated that a participatory and interactive VR intervention had a significant effect on alleviation of the clinical symptoms of SAD, confirming the usefulness of VR for the treatment of SAD. VR treatment is expected to be one of various beneficial therapeutic approaches in the future. Trial Registration: Clinical Research Information Service (CRIS) KCT0003854.
(Submitted) July 31, (Accepted) Sep. 16, 2020, (Published) Oct. 6, 2020 IF=5.034, PCTL=95.016. ISSN: 1438-8871, JMIR Publications Inc., Canada |
Young Im Kim, Seo-Yeon Jung, Seulki Min, Eunbi Seol, Sungho Seo, Ji-Won Hur, Dooyoung Jung, Heon-Jeong Lee, Sungkil Lee, Gerard J. Kim, Chung-Yean Cho, Seungmoon Choi, Seung-Moo Lee, and Chul-Hyun Cho
Psychiatry Investig, 16(2), 167-171, 2019.
With proper guidance, virtual reality (VR) can provide psychiatric therapeutic strategies within a simulated environment. The visuo-haptic-based multimodal feedback VR solution has been developed to improve anxiety symptoms through immersive experience and feedback. A proof-of-concept study was performed to investigate this VR solution. Nine subjects recently diagnosed with panic disorder were recruited, and seven of them eventually completed the trial. Two VR sessions were provided to each subject. Depression, anxiety, and VR sickness were evaluated before and after each session. Although there was no significant effect of the VR sessions on psychiatric symptoms, we could observe a trend of improvement in depression, anxiety, and VR sickness. The VR solution was effective in relieving subjective anxiety, especially in panic disorder without comorbidity. VR sickness decreased over time. This study is a new proof-of-concept trial to evaluate the therapeutic effect of VR solutions on anxiety symptoms using visuo-haptic-based multimodal feedback simultaneously.
Feb. 21, 2019 IF=1.333, PCTL=25.752. ISSN: 1738-3684, Korean Neuropsychiatric Assoc, South Korea |
In this paper, we propose the first end-to-end convolutional neural network (CNN) architecture, Defocus Map Estimation Network (DMENet), for spatially varying defocus map estimation. To train the network, we produce a novel depth-of-field (DOF) dataset, SYNDOF, where each image is synthetically blurred with a ground-truth depth map. Due to the synthetic nature of SYNDOF, the feature characteristics of images in SYNDOF can differ from those of real defocused photos. To address this gap, we use domain adaptation that transfers the features of real defocused photos into those of synthetically blurred ones. Our DMENet consists of four subnetworks: blur estimation, domain adaptation, content preservation, and sharpness calibration networks. The subnetworks are connected to each other and jointly trained with their corresponding supervisions in an end-to-end manner. Our method is evaluated on publicly available blur detection and blur estimation datasets, and the results show state-of-the-art performance.
June 16, 2019 |
Smartphone users often want to customize the positions and functions of physical buttons to accommodate their own usage patterns; however, this is unfeasible for electronic mobile devices based on COTS (Commercial Off-The-Shelf) due to high production costs and hardware design constraints. In this letter, we present the design and implementation of customized virtual buttons that are localized using only common built-in sensors of electronic mobile devices. We develop sophisticated strategies firstly to detect when a user taps one of the virtual buttons, and secondly to locate the position of the tapped virtual button. The virtual-button scheme is implemented and demonstrated in a COTS-based smartphone. The feasibility study shows that, with up to nine virtual buttons on five different sides of the smartphone, the proposed virtual buttons can operate with greater than 90% accuracy.
Apr. 1, 2019 IF=0.368, PCTL=1.917. ISSN: 0916-8508, IEICE, Japan |
We present a scalable solution to render complex scenes from a large amount of view points. While previous approaches rely either on a scene or a view hierarchy to process multiple elements together, we make full use of both, enabling sublinear performance in terms of views and scene complexity. By concurrently traversing the hierarchies, we efficiently find shared information among views to amortize rendering costs. One example application is many-light global illumination. Our solution accelerates shadow map generation for virtual point lights, whose number can now be raised to over a million while maintaining interactive rates.
Mar. 16, 2019 IF=2.373, PCTL=68.692. ISSN: 0167-7055, Wiley Blackwell Publishing, England |
This article presents an iterative backward-warping technique and its applications. It predictively synthesizes depth buffers for novel views. Our solution is based on a fixed-point iteration that converges quickly in practice. Unlike the previous techniques, our solution is a pure backward warping without using bidirectional sources. To efficiently seed the iterative process, we also propose a tight bounding method for motion vectors. Non-convergent depth holes are inpainted via deep depth buffers. Our solution works well with arbitrarily distributed motion vectors under moderate motions. Many scenarios can benefit from our depth warping. As an application, we propose a highly scalable image-based occlusion-culling technique, achieving a significant speedup compared to the state of the art. We also demonstrate the benefit of our solution in multi-view soft-shadow generation.
(submitted) Sep. 21, 2017, (accepted) Jul. 12, 2018, (published) Oct. 23, 2018 IF=4.384, PCTL=97.596. ISSN: 0730-0301, ACM, USA |
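The fixed-point iteration behind the backward warping can be shown in one dimension: to find the source position x with x + mv(x) = x_dst, iterate x ← x_dst − mv(x), which contracts for moderate, smooth motion (the motion field below is a hypothetical stand-in for per-pixel motion vectors):

```python
import numpy as np

def backward_warp_1d(mv, x_dst, iters=20):
    """Find a source position x_src with x_src + mv(x_src) == x_dst via the
    fixed-point iteration x <- x_dst - mv(x). Converges when the motion
    field is a contraction (|mv'| < 1), i.e., moderate smooth motion."""
    x = x_dst                      # seed the iteration at the target position
    for _ in range(iters):
        x = x_dst - mv(x)
    return x

mv = lambda x: 0.3 * np.sin(x)    # smooth motion field; |mv'| <= 0.3 < 1
x_dst = 2.0
x_src = backward_warp_1d(mv, x_dst)
residual = abs(x_src + mv(x_src) - x_dst)
```

Each iteration shrinks the error by the contraction factor (here at most 0.3), so a handful of steps suffices in practice, matching the fast convergence noted above.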
Augmented reality (AR) augments virtual information over the real-world medium and is emerging as an important type of information visualization technique. As such, the visibility and readability of the augmented information must be as high as possible amidst the dynamically changing real-world surroundings and background. In this work, we present a technique based on image saliency analysis to improve the conspicuity of the foreground augmentation against the background real-world medium by adjusting the local brightness contrast. The proposed technique is implemented on a mobile platform considering the usage nature of AR. The saliency computation is carried out for the augmented object’s representative color rather than all the pixels, and the search and adjustment cover only a discrete number of brightness levels to produce the highest contrast saliency, thereby making real-time computation possible. While the resulting imagery may not be optimal due to such a simplification, our tests showed that the visibility was still significantly improved, with little difference from the optimal ground truth in terms of correctly perceiving and recognizing the augmented information. In addition, we also present another experiment that explores in what fashion the proposed algorithm can be applied in actual AR applications. The results suggested that the users clearly preferred the automatic contrast modulation upon large movements in the scenery.
Sep. 1, 2018 IF=1.375, PCTL=36.73. ISSN: 1359-4338, Springer, England |
Image enhancement tasks can highly benefit from depth information, but the direct estimation of outdoor depth maps is difficult due to vast object distances. This paper presents a fully automatic framework for model-based synthesis of outdoor depth maps and its applications to image enhancements. We leverage 3D terrain models and camera pose estimation techniques to render approximate depth maps without resorting to manual alignment. Potential local misalignments, resulting from insufficient model details and rough registrations, are eliminated with our novel free-form warping. We first align synthetic depth edges with photo edges using the as-rigid-as-possible image registration and further refine the shape of the edges using the tight trimap-based alpha matting. The resulting synthetic depth maps are accurate and calibrated in absolute distance. We demonstrate their benefit in image enhancement techniques including reblurring, depth-of-field simulation, haze removal, and guided texture synthesis.
August 1, 2018 IF=1.2, PCTL=46.635. ISSN: 0097-8493, Pergamon-Elsevier Science Ltd, England |
Far-field diffraction can be evaluated using the Discrete Fourier Transform (DFT) in image space but it is costly due to its dense sampling. We propose a technique based on a closed-form solution of the continuous Fourier transform for simple vector primitives (quads) and propose a hierarchical and progressive evaluation to achieve real-time performance. Our method is able to simulate diffraction effects in optical systems and can handle varying visibility due to dynamic light sources. Furthermore, it seamlessly extends to near-field diffraction. We show the benefit of our solution in various applications, including realistic real-time glare and bloom rendering.
Jul. 1, 2018 IF=2.046, PCTL=79.327. ISSN: 0167-7055, Wiley Blackwell Publishing, England |
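The closed-form idea can be checked on the simplest primitive: the Fraunhofer (far-field) spectrum of a rectangular aperture is a product of sincs, which a dense DFT only approximates at much higher cost (sizes and units below are arbitrary; the paper handles general quads, dynamic visibility, and near-field effects):

```python
import numpy as np

def rect_farfield(fx, fy, w, h):
    """Closed-form Fraunhofer spectrum of a w-by-h rectangular aperture:
    the 2D Fourier transform of a box is a product of sincs
    (np.sinc(x) = sin(pi x) / (pi x))."""
    return w * h * np.sinc(w * fx) * np.sinc(h * fy)

# Dense DFT reference on a sampled unit-square aperture
N, L = 256, 8.0                               # sample count, spatial extent
xs = (np.arange(N) - N / 2) * (L / N)
X, Y = np.meshgrid(xs, xs)
aperture = ((np.abs(X) <= 0.5) & (np.abs(Y) <= 0.5)).astype(float)
F = np.fft.fftshift(np.fft.fft2(np.fft.ifftshift(aperture))) * (L / N) ** 2
freqs = np.fft.fftshift(np.fft.fftfreq(N, d=L / N))
FX, FY = np.meshgrid(freqs, freqs)
closed = rect_farfield(FX, FY, 1.0, 1.0)
err = np.max(np.abs(F.real - closed))         # discretization error only
```

The residual `err` comes purely from sampling the aperture; the closed form itself is exact and evaluable at any frequency without a grid.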
Abnormal messages propagated from faulty operations in a vehicular system may severely harm the system, but they cannot be easily detected when their information is not known in advance. To support an efficient detection of faulty message patterns propagated in the in-vehicle network, this paper presents a novel graph pattern matching framework built upon message log-driven graph modeling. Our framework models the unknown condition as a query graph and the reference database of normal operations as data graphs. The analysis of faulty message propagation requires considering the sequence of events in the distance measure, and thus, the conventional graph distance measures cannot be directly used for our purpose. We hence propose a novel distance metric based on the maximum common subgraph (MCS) between two graphs and the sequence numbers of messages, which works robustly even for abnormal faulty patterns and can avoid false negatives in large databases. Since the problem of MCS computation is NP-hard, we also propose two efficient filtering techniques, one based on the lower bound of MCS distance for a polynomial-time approximation and the other based on edge pruning. Experiments performed on real and synthetic datasets to assess our framework show that ours significantly outperforms the previously existing methods in terms of both performance and accuracy of query responses.
Feb. 24, 2018 IF=2.278, PCTL=81.154. ISSN: 0164-1212, Elsevier Science Inc, USA |
This paper presents a scalable parser framework using graphics processing units (GPUs) for massive text-based files. Specifically, our solution is designed to efficiently parse Wavefront OBJ models, whose text files specify 3D geometries and their topology. Our work bases its scalability and efficiency on chunk-based processing. The entire parsing problem is subdivided into chunk-level subproblems that can be processed independently and merged seamlessly. The within-chunk processing is made highly parallel by leveraging GPUs. Our approach thereby overcomes the bottlenecks of the existing OBJ parsers. Experiments performed to assess the performance of our system showed that our solution significantly outperforms existing CPU-based and GPU-based solutions.
IF=0.878, PCTL=20.433. ISSN: 1000-9000, Springer, China |
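The chunk-based decomposition can be sketched in plain Python: split the file at line boundaries, parse each chunk independently, and merge results in order (a sequential CPU stand-in; the paper's within-chunk processing runs in parallel on the GPU):

```python
def parse_obj_chunks(text, num_chunks=4):
    """Chunk-based OBJ parsing sketch: split at line boundaries, parse each
    chunk independently, then merge in order."""
    lines = text.splitlines()
    step = max(1, (len(lines) + num_chunks - 1) // num_chunks)
    chunks = [lines[i:i + step] for i in range(0, len(lines), step)]

    def parse_chunk(chunk):
        verts, faces = [], []
        for line in chunk:
            tok = line.split()
            if not tok:
                continue
            if tok[0] == 'v':          # vertex position: v x y z
                verts.append(tuple(float(x) for x in tok[1:4]))
            elif tok[0] == 'f':        # face: f v1 v2 v3 ... (1-based indices)
                faces.append(tuple(int(t.split('/')[0]) for t in tok[1:]))
        return verts, faces

    verts, faces = [], []
    for v, f in map(parse_chunk, chunks):  # chunks are fully independent
        verts += v
        faces += f
    return verts, faces

obj = "v 0 0 0\nv 1 0 0\nv 0 1 0\nf 1 2 3\n"
verts, faces = parse_obj_chunks(obj, num_chunks=2)
```

Because OBJ statements are line-delimited and order-independent within a type, each chunk can be parsed with no knowledge of the others, which is what makes the GPU mapping effective.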
Cellular internet-of-things (CIoT) systems are recently developed by the third-generation partnership project (3GPP) to support internet-of-things (IoT) services over the conventional mobile-communication infrastructures. The CIoT systems allow a large number of IoT devices to be connected through the random-access procedure, but the concurrent accesses of the massive devices make this procedure heavily competitive. In this article, we present an effective time-division random-access scheme built upon the coverage levels (CLs), where each CIoT device is assigned a CL and categorized based on its radio-channel quality. In our scheme, the random-access loads of device groups having different CLs are distributed into different time periods, which greatly relaxes instantaneous contention and improves random-access performance. To assess the performance of our scheme, we also introduce a mathematical model that expresses and analyzes the states and behaviors of CIoT devices using the Markov chain. Mathematical analysis and simulation results show that our scheme significantly outperforms the conventional scheme (without time-division control) in terms of collision probability, access success rate, and access-blocking probability.
Feb. 01, 2018 IF=2.974, PCTL=73.79. ISSN: 1574-1192, Elsevier, Netherlands |
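The benefit of distributing random-access loads over time periods can be illustrated with a toy Monte Carlo estimate of preamble-collision rates (device and preamble counts below are arbitrary; the paper analyzes the full CIoT procedure with a Markov chain):

```python
import random

def collision_rate(n_devices, n_preambles, n_slots=1, trials=2000, seed=1):
    """Monte Carlo estimate of the fraction of devices whose randomly chosen
    preamble collides; spreading devices across `n_slots` time periods models
    the time-division scheme (toy model, not the paper's analysis)."""
    rng = random.Random(seed)
    collided = total = 0
    for _ in range(trials):
        for _slot in range(n_slots):
            k = n_devices // n_slots          # devices contending per period
            picks = [rng.randrange(n_preambles) for _ in range(k)]
            counts = {}
            for p in picks:
                counts[p] = counts.get(p, 0) + 1
            collided += sum(c for c in counts.values() if c > 1)
            total += k
    return collided / total

single = collision_rate(60, 54, n_slots=1)    # all devices contend at once
tdm = collision_rate(60, 54, n_slots=3)       # load split over three periods
```

Splitting the same population over three periods sharply lowers instantaneous contention, which is the effect the scheme exploits.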
This paper presents a novel heterogeneous volume deformation technique and an intuitive volume animation authoring framework. Our volume deformation extends the previous technique based on moving least squares with a density-aware weighting metric for data-driven importance control and efficient upsampling-based volume synthesis. For user interaction, we present an intuitive visual metaphor and interaction schemes to support effective spatiotemporal editing of volume deformation animation. Our framework is implemented fully on graphics processors and thus suitable for quick-and-easy prototyping of volume deformation with improved controllability.
Feb. 6, 2018 IF=0.697, PCTL=12.981. ISSN: 1546-4261, Wiley, England |
In order to facilitate low-cost network connection of many devices, machine-type communication (MTC) has evolved to low-cost MTC (LC-MTC) in the third-generation partnership project (3GPP) standard. LC-MTC should be able to effectively handle intensive accesses through multiple narrow-band (NB) random-access channels (RACHs) assigned within the bandwidth of a long-term evolution (LTE) system. As the number of MTC devices and their congestion rapidly increase, the random-access scheme for LC-MTC RACH needs to be improved. This paper presents a novel random-access scheme that introduces virtual preambles of LC-MTC devices and associates them with RACH indices to effectively discern LC-MTC devices. In comparison to the sole use of preambles, our scheme allows an LC-MTC device to better choose a unique virtual preamble. Thereby, the probability of successful accesses of LC-MTC devices increases in contention-based random-access environments. We experimentally assessed our scheme and the results show that our scheme performs better than the existing preamble-based scheme in terms of collision probability, access delay, and access blocking probability.
July 1, 2017 IF=4.066, PCTL=89.003. ISSN: 0018-9545, IEEE, USA |
Many visual tasks in modern personal devices such as smartphones resort heavily to graphics processing units (GPUs) for their fluent user experiences. Because most GPUs for embedded systems are nonpreemptive by nature, it is important to schedule GPU resources efficiently across multiple GPU tasks. We present a novel spatial resource sharing (SRS) technique for GPU tasks, called budget-reservation spatial resource sharing (BR-SRS) scheduling, which limits the number of GPU processing cores for a job based on the priority of the job. Such a priority-driven resource assignment can prevent a high-priority foreground GPU task from being delayed by background GPU tasks. The BR-SRS scheduler is invoked only twice, at the arrival and completion of jobs, and thus, the scheduling overhead is minimized as well. We evaluated the performance of our scheduling scheme in an Android-based smartphone, and found that the proposed technique significantly improved the performance of high-priority tasks in comparison to the previous temporal budget-based multi-task scheduling.
May 5, 2017 IF=1.579, PCTL=52.585. ISSN: 1383-7621, Elsevier, Netherlands |
A virtualized system generally suffers from low I/O performance, mainly caused by its inherent abstraction overhead and frequent CPU transitions between the guest and hypervisor modes. The recent research of polling-based I/O virtualization partly solved the problem, but excessive polling trades intensive CPU usage for higher performance. This article presents a power-efficient and high-performance block I/O framework for a virtual machine, which allows us to use it even with a limited number of CPU cores in mobile or embedded systems. Our framework monitors system status, and dynamically switches the I/O process mode between the exit and polling modes, depending on the amounts of current I/O requests and CPU utilization. It also dynamically controls the polling interval to reduce redundant polling. The highly dynamic nature of our framework leads to improvements in I/O performance with lower CPU usage as well. Our experiments showed that our framework outperformed the existing exit-based mechanisms with 10.8% higher I/O throughput while maintaining similar CPU usage (only a 3.1% increase). In comparison to the systems solely based on the polling mechanism, ours reduced the CPU usage roughly down to 10.0% with no or negligible performance loss.
Mar. 23, 2017 IF=1.326, PCTL=40.704. ISSN: 0920-8542, Springer, USA |
We present an efficient ray-tracing technique to render bokeh effects produced by parametric aspheric lenses. Contrary to conventional spherical lenses, aspheric lenses generally do not permit a simple closed-form solution of ray-surface intersections. We propose a numerical root-finding approach, which uses tight proxy surfaces to ensure a good initialization and convergence behavior. Additionally, we simulate mechanical imperfections resulting from the lens fabrication via a texture-based approach. Fractional Fourier transform and spectral dispersion add additional realism to the synthesized bokeh effect. Our approach is well-suited for execution on graphics processing units (GPUs) and we demonstrate complex defocus-blur and lens-flare effects.
Jun. 22, 2016 IF=1.542, PCTL=84.434. ISSN: 0167-7055, Wiley Blackwell Publishing, England |
This article presents a novel parametric model to include expressive chromatic aberrations in defocus blur rendering and its effective implementation using the accumulation buffering. Our model modifies the thin-lens model to adopt the axial and lateral chromatic aberrations, which allows us to easily extend them with nonlinear and artistic appearances beyond physical limits. For the dispersion to be continuous, we employ a novel unified 3D sampling scheme, involving both the lens and spectrum. We further propose a spectral equalizer to emphasize particular dispersion ranges. As a consequence, our approach enables more intuitive and explicit control of chromatic aberrations, unlike the previous physically-based rendering methods.
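A minimal sketch of how axial chromatic aberration can enter a thin-lens defocus model: the focal length is made wavelength-dependent, which in turn varies the circle of confusion per spectral sample. The linear dispersion model and all constants below are illustrative assumptions, not the paper's parametric model.

```python
def focal_length(wavelength_nm, f_d=50.0, abbe=40.0):
    """Axial chromatic aberration sketch: vary the focal length linearly
    around the design wavelength (587 nm). The linear model and constants
    are illustrative, not taken from the paper."""
    return f_d * (1.0 + (wavelength_nm - 587.0) / (587.0 * abbe))

def coc_diameter(z, z_focus, wavelength_nm, aperture=8.0):
    """Thin-lens circle of confusion for an object at depth z when focused
    at z_focus, using the wavelength-dependent focal length above."""
    f = focal_length(wavelength_nm)
    return aperture * abs(f * (z - z_focus)) / (z * (z_focus - f))
```

Sampling `coc_diameter` over many wavelengths (jointly with lens positions, as in the paper's unified 3D sampling) yields continuous dispersion fringes around defocused edges.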
June, 2016 IF=1.06, PCTL=53.302. ISSN: 0178-2789, Springer, Germany |
The Internet of Things (IoT) has recently emerged as a common platform and service for consumer electronics. This paper presents an interactive framework for visualizing and authoring IoT in indoor environments such as homes or small offices. The building blocks of the framework are virtual sensors and actuators that abstract physical things and their virtual behaviors on top of their physical networks. Their behaviors are abstracted and programmed through visual authoring tools on the web, which allows a casual consumer to easily monitor and define their behaviors even without knowing the underlying physical connections. A user study performed to assess the usability of the visual authoring showed that it is easy to use and understandable, and also preferred to typical text-based script programming.
Aug., 2015 IF=1.045, PCTL=48.641. ISSN: 0098-3063, IEEE, USA |
Tactile feedback coordinated with visual stimuli has proven its worth in mediating immersive multimodal experiences, yet its authoring has relied on content artists. This article presents a fully automated framework of generating tactile cues from streaming images to provide synchronized visuotactile stimuli in real time. The spatiotemporal features of video images are analyzed on the basis of visual saliency and then mapped into the tactile cues that are rendered on tactors installed on a chair. We also conducted two user experiments for performance evaluation. The first experiment investigated the effects of visuotactile rendering against visual-only rendering, demonstrating that the visuotactile rendering improved the movie watching experience to be more interesting, immersive, and understandable. The second experiment was performed to compare the effectiveness of authoring methods and found that the automated authoring approach, used with care, can produce plausible tactile effects similar in quality to manual authoring.
Sep. 17, 2014 IF=2.03, PCTL=81.25. ISSN: 1939-1412, IEEE Computer Society, USA |
Visualization techniques often use color to present categorical differences to a user. When selecting a color palette, the perceptual qualities of color need careful consideration. Large coherent groups visually suppress smaller groups, and are often visually dominant in images. This article introduces the concept of class visibility used to quantitatively measure the utility of a color palette to present coherent categorical structure to the user. We present a color optimization algorithm based on our class visibility metric to make categorical differences clearly visible to the user. We performed two user experiments on user preference and visual search to validate our visibility measure over a range of color palettes. The results indicate that visibility is a robust measure, and our color optimization can increase the effectiveness of categorical data visualizations.
Oct., 2013 IF=1.898, PCTL=88.095. ISSN: 1077-2626, IEEE Computer Society, USA |
This paper presents a GPU-based rendering algorithm for real-time defocus blur effects, which significantly improves the accumulation buffering. The algorithm combines three distinctive techniques: (1) adaptive discrete geometric level of detail (LOD), made popping-free by blending visibility samples across the two adjacent geometric levels; (2) adaptive visibility/shading sampling via sample reuse; (3) visibility supersampling via height-field ray casting. All three techniques are seamlessly integrated to lower the rendering cost of smooth defocus blur with high visibility sampling rates, while maintaining most of the quality of brute-force accumulation buffering.
Sep. 13, 2013 IF=1.638, PCTL=82.381. ISSN: 0167-7055, Wiley Blackwell Publishing, England |
We present a practical real-time approach for rendering lens-flare effects. While previous work employed costly ray tracing or complex polynomial expressions, we present a coarser, but also significantly faster solution. Our method is based on a first-order approximation of the ray transfer in an optical system, which allows us to derive a matrix that maps lens-flare-producing light rays directly to the sensor. The resulting approach is easy to implement and produces physically-plausible images at high framerates on standard off-the-shelf graphics hardware.
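The first-order (paraxial) ray-transfer formulation mentioned above can be sketched with standard 2×2 matrices whose product maps a ray's height and angle directly to the sensor. The example system below, a single thin lens in a 2f-2f imaging configuration, is illustrative and not the paper's flare-producing lens data.

```python
def mat_mul(A, B):
    """2x2 matrix product (applied right-to-left along the optical path)."""
    return [[A[0][0]*B[0][0] + A[0][1]*B[1][0], A[0][0]*B[0][1] + A[0][1]*B[1][1]],
            [A[1][0]*B[0][0] + A[1][1]*B[1][0], A[1][0]*B[0][1] + A[1][1]*B[1][1]]]

def translate(d):   # free-space propagation over distance d
    return [[1.0, d], [0.0, 1.0]]

def thin_lens(f):   # refraction through a thin lens of focal length f
    return [[1.0, 0.0], [-1.0 / f, 1.0]]

def trace(M, ray):  # ray = (height, angle) in the paraxial regime
    h, u = ray
    return (M[0][0] * h + M[0][1] * u, M[1][0] * h + M[1][1] * u)
```

For an object 100 units in front of a 50-unit lens imaged 100 units behind it, the composed matrix has a zero upper-right element (the imaging condition), so every ray from a point lands at the same sensor height regardless of angle.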
Jul. 18, 2013 IF=1.638, PCTL=82.381. ISSN: 0167-7055, Wiley Blackwell Publishing, England |
This paper presents a new DOF rendering algorithm, based on the distributed rasterization [Haeberli and Akeley 1990] and LOD management. Our solution allows us to maintain the benefit of the objectbased approach without spatiotemporal quality loss, while achieving real-time performance. A key idea is that geometric degradation of models is not perceived when they are blurred. Hence, lower details can be used for blurred models, greatly improving performance. Another challenge here is avoiding temporal popping artifacts resulting from transitions between adjacent discrete levels. To avoid this problem, we propose a novel blending method for LOD.
June 13, 2012
|
Lens flare is caused by light passing through a photographic lens system in an unintended way. Often considered a degrading artifact, it has become a crucial component for realistic imagery and an artistic means that can even lead to an increased perceived brightness. So far, only costly offline processes allowed for convincing simulations of the complex light interactions. In this paper, we present a novel method to interactively compute physically-plausible flare renderings for photographic lenses. The underlying model covers many components that are important for realism, such as imperfections, chromatic and geometric lens aberrations, and antireflective lens coatings. Various acceleration strategies allow for a performance/quality tradeoff, making our technique applicable both in real-time applications and in high-quality production rendering. We further outline artistic extensions to our system.
Jul. 25, 2011 IF=3.632, PCTL=97.475. ISSN: 0730-0301, ACM, USA |
This article evaluates the usability of motion sensing-based interaction on a mobile platform using image browsing as a representative task. Three types of interfaces, a physical button interface, a motion-sensing interface using a high-precision commercial 3D motion tracker, and a motion-sensing interface using an in-house low-cost 3D motion tracker, are compared in terms of task performance and subjective preference. Participants were provided with prolonged training over 20 days, in order to compensate for the participants’ unfamiliarity with the motion-sensing interfaces. Experimental results showed that the participants’ task performance and subjective preference for the two motion-sensing interfaces were initially low, but they rapidly improved with training and soon approached the level of the button interface. Furthermore, a recall test, which was conducted 4 weeks later, demonstrated that the usability gains were well retained in spite of the long time gap between uses. Overall, these findings highlight the potential of motion-based interaction as an intuitive interface for mobile devices.
IF=1.192, PCTL=48.214. ISSN: 0953-5438, Oxford Univ Press, Netherlands |
We present a novel rendering system for defocus-blur and lens effects. It supports physically-based rendering and outperforms previous approaches by involving a novel GPU-based tracing method. Our solution achieves more precision than competing real-time solutions, and our results are mostly indistinguishable from offline rendering. Our method is also more general and can integrate advanced simulations, such as simple geometric lens models enabling various lens aberration effects. The latter are crucial for realism, but are often employed in artistic contexts too. We show that available artistic lenses can be simulated by our method. In this spirit, our work introduces an intuitive control over depth-of-field effects. The physical basis is crucial as a starting point to enable new artistic renderings based on a generalized focal surface to emphasize particular elements in the scene while retaining a realistic look. Our real-time solution provides realistic as well as plausible expressive results.
IF=3.619, PCTL=98.387. ISSN: 0730-0301, ACM, USA |
We present a GPU-based real-time rendering method that simulates a high-quality depth-of-field blur, similar in quality to multiview accumulation methods. Most real-time approaches have difficulty obtaining good approximations of visibility and view-dependent shading due to the use of a single view image. Our method also avoids the multiple rendering of a scene, but can approximate different views by relying on a layered image-based scene representation. We present several performance and quality improvements, such as early culling, approximate cone tracing, and jittered sampling. Our method achieves artifact-free results for complex scenes and reasonable depth-of-field blur in real time.
IF=3.383, PCTL=97.093. ISSN: 0730-0301, ACM, USA |
This article presents a real-time GPU-based postfiltering method for rendering acceptable depth-of-field effects suited for virtual reality. Blurring is achieved by nonlinearly interpolating mipmap images generated from a pinhole image. Major artifacts common in the postfiltering techniques such as a bilinear magnification artifact, intensity leakage, and blurring discontinuity are practically eliminated via magnification with a circular filter, anisotropic mipmapping, and smoothing of blurring degrees. The whole framework is accelerated using GPU programs for constant and scalable real-time performance required for virtual reality. We also compare our method to recent GPU-based methods in terms of image quality and rendering performance.
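A minimal sketch of the mipmap-based blurring idea, assuming a log2 mapping from the circle-of-confusion diameter to a fractional mip level and a per-pixel blend of the two adjacent levels; the paper's exact mapping and nonlinear interpolation may differ.

```python
import math

def mip_level_from_coc(coc_pixels):
    """Fractional mipmap level for a circle-of-confusion diameter in pixels:
    level L averages roughly a 2^L-pixel footprint, so L = log2(coc).
    This mapping is an illustrative choice, not the paper's exact formula."""
    return max(0.0, math.log2(max(coc_pixels, 1.0)))

def blend_mip_samples(levels, level):
    """Per-pixel interpolation between two adjacent mip images; `levels`
    holds one scalar sample per mip level for a single pixel."""
    lo = min(int(level), len(levels) - 1)
    hi = min(lo + 1, len(levels) - 1)
    t = level - lo
    return (1.0 - t) * levels[lo] + t * levels[hi]
```

Smoothing the per-pixel blur degrees before this lookup, as the abstract describes, removes blurring discontinuities at depth boundaries.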
IF=2.445, PCTL=91.279. ISSN: 1077-2626, IEEE Computer Society, USA |
This paper presents a real-time framework for computationally tracking objects visually attended by the user while navigating in interactive virtual environments (VEs). In addition to the conventional bottom-up (stimulus-driven) saliency map, the proposed framework uses top-down (goal-directed) contexts inferred from the user's spatial and temporal behaviors and identifies the most plausibly attended objects among candidates in the object saliency map. The computational framework was implemented on the GPU, exhibiting high computational performance adequate for interactive VEs. A user experiment was also conducted to evaluate the prediction accuracy of the tracking framework by comparing objects regarded as visually attended by the framework to actual human gaze collected with an eye tracker. The results indicated that the accuracy was at a level well supported by the theory of human cognition for visually identifying single and multiple attentive targets, especially owing to the addition of top-down contextual information. Finally, we demonstrate how the visual attention tracking framework can be applied to managing the level of detail in VEs, without any hardware for head or eye tracking.
IF=2.445, PCTL=91.279. ISSN: 1077-2626, IEEE Computer Society, USA |
We present a real-time method for rendering a depth-of-field effect based on the per-pixel layered splatting where source pixels are scattered on one of the three layers of a destination pixel. In addition, the missing information behind foreground objects is filled with an additional image of the areas occluded by nearer objects. The method creates high-quality depth-of-field results even in the presence of partial occlusion, without major artifacts often present in the previous real-time methods. The method can also be applied to simulating defocused highlights. The entire framework is accelerated by GPU, enabling real-time post-processing for both off-line and interactive applications.
IF=1.107, PCTL=58.929. ISSN: 0167-7055, Wiley Blackwell Publishing, England |
This article reports two human experiments to investigate the effects of visual cues and sustained attention on spatial presence over a period of prolonged exposure in virtual environments. Inspired by the two functional subsystems subserving spatial and object vision in the human brain, visual cues and sustained attention were each classified into spatial and object cues, and spatial and non-spatial attention, respectively. In the first experiment, the effects of visual cues on spatial presence were examined when subjects were exposed to virtual environments configured with combinations of spatial and object cues. It was found that both types of visual cues enhanced spatial presence with saturation over a period of prolonged exposure, but the contribution of spatial cues became more relevant with longer exposure time. In the second experiment, subjects were asked to carry out two tasks involving sustained spatial attention and sustained non-spatial attention. We observed that spatially directed attention improved spatial presence more than non-spatially directed attention did. Furthermore, spatial attention had a positive interaction with detailed object cues.
IF=0.969, PCTL=59.729. ISSN: 0953-5438, Oxford Univ Press, Netherlands |
This paper presents a real-time framework for computationally tracking objects visually attended by the user while navigating in interactive virtual environments. In addition to the conventional bottom-up (stimulus-driven) features, the framework also uses top-down (goal-directed) contexts to predict the human gaze. The framework first builds feature maps using preattentive features such as luminance, hue, depth, size, and motion. The feature maps are then integrated into a single saliency map using the center-surround difference operation. This pixel-level bottom-up saliency map is converted to an object-level saliency map using the item buffer. Finally, the top-down contexts are inferred from the user’s spatial and temporal behaviors during interactive navigation and used to select the most plausibly attended object among candidates produced in the object saliency map. The computational framework was implemented using the GPU and exhibited extremely fast computing performance (5.68 msec for a 256x256 saliency map), substantiating its adequacy for interactive virtual environments. A user experiment was also conducted to evaluate the prediction accuracy of the visual attention tracking framework with respect to actual human gaze data. The attained accuracy level was well supported by the theory of human cognition for visually identifying single and multiple attentive targets, especially due to the addition of top-down contextual information. The framework can be effectively used for perceptually based rendering without employing an expensive eye tracker, such as providing depth-of-field effects and managing the level of detail in virtual environments.
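The center-surround difference operation mentioned above can be sketched as the per-pixel difference between a fine and a coarse scale of a feature map. The naive box blur and the two radii below are illustrative stand-ins for the actual multi-scale GPU implementation.

```python
def box_blur(img, radius):
    """Naive box blur on a 2D list of floats (small maps only)."""
    h, w = len(img), len(img[0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            acc, n = 0.0, 0
            for yy in range(max(0, y - radius), min(h, y + radius + 1)):
                for xx in range(max(0, x - radius), min(w, x + radius + 1)):
                    acc += img[yy][xx]
                    n += 1
            out[y][x] = acc / n
    return out

def center_surround(feature_map, center_r=1, surround_r=3):
    """Center-surround difference: |fine scale - coarse scale| per pixel,
    so locally distinctive features stand out while uniform regions vanish."""
    c = box_blur(feature_map, center_r)
    s = box_blur(feature_map, surround_r)
    return [[abs(cv - sv) for cv, sv in zip(crow, srow)]
            for crow, srow in zip(c, s)]
```

A uniform map produces zero response everywhere, while an isolated bright feature yields a strong local response, which is the behavior a saliency map needs.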
|
While hand-held computing devices are capable of rendering advanced 3D graphics and processing multimedia data, they are not designed to provide and induce a sufficient sense of immersion and presence for virtual reality. In this paper, we propose minimal requirements for realizing VR on a hand-held device. Furthermore, based on the proposed requirements, we have designed and implemented a low-cost hand-held VR platform by adding multimodal sensors and display components to a hand-held PC. The platform enables a motion-based interface, an essential part of realizing VR on a small hand-held device, and provides outputs in three modalities (visual, aural, and tactile/haptic) for a reasonable sensory experience. We showcase our platform and demonstrate the possibilities of hand-held VR through three VR applications: a typical virtual walkthrough, a 3D multimedia contents browser, and a motion-based racing game.
ISSN: 1081-1451 |
Presence is one of the goals of many virtual reality systems. Historically, in the context of virtual reality, the concept of presence has been associated mostly with spatial perception (a bottom-up process), as its informal definition of "feeling of being there" suggests. However, recent studies in presence have challenged this view and attempted to widen the concept to include psychological immersion, thus linking more high-level elements (processed in a top-down fashion) to presence, such as story and plot, flow, attention and focus, identification with the characters, and emotion. In this paper, we experimentally studied the relationship between two content elements, each representing one axis of the presence dichotomy: perceptual cues for spatial presence and sustained attention for (psychological) immersion. Our belief was that spatial perception or presence and a top-down processed concept such as voluntary attention have only a very weak relationship; thus, our experimental hypothesis was that sustained attention would positively affect spatial presence in a virtual environment with impoverished perceptual cues, but have no effect in an environment rich in them. In order to confirm the existence of sustained attention in the experiment, fMRI scans of the subjects were taken and analyzed as well. The experimental results showed that attention had no effect on spatial presence, even in the environment with impoverished spatial cues.
|
Spatial presence, among the many aspects of presence, is the sense of physical and concrete space, often dubbed the sense of "being there." This paper theorizes on how "spatial" presence is formed by various types of artificial cues in a virtual environment, whether form or content. We believe that spatial presence is a product of an unconscious effort to correctly register oneself into the virtual environment in a consistent manner. We hypothesize that this process is perceptual and bottom-up in nature, and rooted in the reflexive and adaptive behavior to react to and resolve the mismatch in the spatial cues between the physical space where the user is and the virtual space the user looks at, hears from, and interacts with. Hinted by the fact that our brain has two major paths for processing sensory input, the "where" path for determining object locations and the "what" path for identifying objects, we categorize the sensory stimulation cues in the virtual environment accordingly and investigate their relationships and how they affect the user in adaptively registering oneself into the virtual environment, thus creating spatial presence. Based on the results of a series of our experiments and other bodies of research, we postulate that while low-level and perceptual spatial cues are sufficient for creating spatial presence, they can be affected and modulated by the spatial factors, whether form or content. These results provide important insights into constructing a model of spatial presence, its measurement, and guidelines for configuring location-based virtual reality applications.
|
Conference Posters, Talks, and WIPs
We present a semi-automated framework that translates day-time domain road scene images to the night-time domain. Unlike recent studies based on Generative Adversarial Networks (GANs), we avoid learning-based translation and its random failures. Our framework uses semantic annotation to extract scene elements, perceives the scene structure/depth, and applies per-element translation. Experimental results demonstrate that our framework can synthesize higher-resolution results without translation artifacts.
Dec. 10, 2020
|
This poster presents an iterative hierarchical Raster Occlusion Culling (ROC) technique that can scale up to massive scenes with higher geometry complexities. Unlike the original ROC, we do not handle individual objects, but use their hierarchical structures such as a Bounding Volume Hierarchy (BVH). The BVH is iteratively traversed from a moderate depth down to a deeper level (but not the bottom of the tree). The interior nodes are occlusion-tested in batch. The granularity of the culling is coarser, but the lightweight occlusion test with fewer draw calls leads to a great speedup in overall rendering performance.
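The iterative batched traversal can be sketched as follows, with a visibility callback standing in for the batched GPU occlusion test; the node layout, the depth bounds, and the dict-based BVH representation are hypothetical simplifications.

```python
def cull_bvh(root, is_visible, start_depth=2, max_depth=5):
    """Iterative hierarchical occlusion-culling sketch: gather nodes at a
    moderate start depth, batch-test their bounds, and descend only the
    visible interiors until max_depth (not the bottom of the tree).
    `is_visible(node)` stands in for a batched GPU occlusion test."""
    def nodes_at_depth(node, depth):
        if depth == 0 or not node.get("children"):
            return [node]
        out = []
        for c in node["children"]:
            out.extend(nodes_at_depth(c, depth - 1))
        return out

    frontier = nodes_at_depth(root, start_depth)
    drawn = []
    for depth in range(start_depth, max_depth + 1):
        visible = [n for n in frontier if is_visible(n)]   # one batched test
        next_frontier = []
        for n in visible:
            if n.get("children") and depth < max_depth:
                next_frontier.extend(n["children"])
            else:
                drawn.append(n["name"])   # draw the whole subtree coarsely
        frontier = next_frontier
        if not frontier:
            break
    return drawn
```

Stopping above the leaves keeps the per-frame test count (and draw calls) small, which is the trade-off the poster describes.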
July 13, 2020
|
Conventional image-based rendering has limited applicability for large-scale spaces. In this study, we demonstrate an efficient alternative to conventional image-based rendering. Our key approach is based on a spatial template (ST), which solely includes architectural geometric primitives. The predictability of ST improves the efficiency of acquisition, storage, and rendering. Thereby, our system can be applied to the modeling and rendering of larger indoor spaces.
|
Lens flare, comprising diffraction patterns of direct lights and ghosts of an aperture, is one of the artistic artifacts in optical systems. The generation of far-field diffraction patterns has commonly used the Fourier transform of the iris aperture. While such outcomes are physically faithful, more flexible and intuitive editing of diffraction patterns has not been explored so far. In this poster, we present a novel scheme of diffraction synthesis, which additively integrates diffraction elements. We decompose the aperture into curved edges and a circular core, which abstract non-symmetric streaks and circular core highlights, respectively. We then apply the Fourier transform to each, rotate them, and finally composite them into a single output image. In this way, we can easily generate diffraction patterns similar to that of the source aperture, as well as more exaggerated ones.
May 9, 2016
|
This paper presents a real-time framework for computationally tracking objects visually attended by the user while navigating in interactive virtual environments. In addition to the conventional bottom-up (stimulus-driven) features, the framework also uses top-down (goal-directed) contexts to predict the human gaze. The framework first builds feature maps using preattentive features such as luminance, hue, depth, size, and motion. The feature maps are then integrated into a single saliency map using the center-surround difference operation. This pixel-level bottom-up saliency map is converted to an object-level saliency map using the item buffer. Finally, the top-down contexts are inferred from the user’s spatial and temporal behaviors during interactive navigation and used to select the most plausibly attended object among candidates produced in the object saliency map. The computational framework was implemented using the GPU and exhibited extremely fast computing performance (5.68 msec for a 256x256 saliency map), substantiating its adequacy for interactive virtual environments. A user experiment was also conducted to evaluate the prediction accuracy of the visual attention tracking framework with respect to actual human gaze data. The attained accuracy level was well supported by the theory of human cognition for visually identifying single and multiple attentive targets, especially due to the addition of top-down contextual information. The framework can be effectively used for perceptually based rendering without employing an expensive eye tracker, such as providing depth-of-field effects and managing the level of detail in virtual environments.
|
Volume editing with moving least squares is one of the effective schemes to achieve continuous and smooth deformation of existing volumes, but its interactive authoring has not been explored extensively. We present a framework for interactive editing of volume data with free-form deformation, which provides intuitive and interactive feedback on the fly. Given control points, we extend moving least squares with their visual metaphor to further encompass non-spatial attributes including lightness, density, and hue. Furthermore, a full GPU implementation of our framework achieves instant real-time feedback with a quick-and-easy volume-editing metaphor.
|
This paper presents a recursive tessellation scheme, which can represent virtually infinitesimal details beyond the typical limits of graphics hardware at run time, using multiple levels of displacement mapping.
|
Yuna Jeong, Kangtae Kim, and Sungkil Lee
ACM SIGGRAPH Posters, 2012.
Selected in the Semifinal list of SIGGRAPH Student Research Competition
This paper presents a new DOF rendering algorithm, based on the distributed rasterization [Haeberli and Akeley 1990] and LOD management. Our solution allows us to maintain the benefit of the objectbased approach without spatiotemporal quality loss, while achieving real-time performance. A key idea is that geometric degradation of models is not perceived when they are blurred. Hence, lower details can be used for blurred models, greatly improving performance. Another challenge here is avoiding temporal popping artifacts resulting from transitions between adjacent discrete levels. To avoid this problem, we propose a novel blending method for LOD.
|
Patents
Jungbum Kim (김정범), Sungkil Lee (이성길), Sangjun Ahn (안상준), Yuna Jeong (정유나), Sooryum Choi (최수렴), Yuri Roh (노유리), Soyoung Park (박소영), and Yoonji Choi (최윤지)
Patent No. US 11,252,400 B2, Application No. 16/758,305, 2022.
Disclosed is a method, performed by a device, of processing an image, the method including: for an original image at a particular time point among a plurality of original images having a sequential relationship in terms of time, determining a cumulative value due to an afterimage of another original image before the particular time point; based on the determined cumulative value and the plurality of original images, obtaining a plurality of blur compensation images for removing a blur caused by the afterimage; and outputting the obtained plurality of blur compensation images.
(Application) Nov. 20, 2018, (Patent) Feb. 15, 2022
|
Provided are a method for rearranging webcomic content and a device therefor. A method for rearranging webcomic content according to one embodiment of the present invention comprises the steps of: obtaining first content including a plurality of image cuts composed of a plurality of elements; extracting the plurality of elements included in the first content; generating a plurality of image cut layers by reconstructing the plurality of extracted elements; and arranging the plurality of generated image cut layers in a specified arrangement so as to generate second content.
(US Application) June 8, 2023, (PCT Application) Dec. 6, 2021, (Publication) Jun. 16, 2022
|
Disclosed is a method for rearranging image cuts of cartoon content according to various embodiments. This method for rearranging cartoon content is performed by a computing device and includes the steps of: loading first content in which a plurality of image cuts are arrayed two-dimensionally; extracting a plurality of cut areas, in which the plurality of image cuts from the first content are positioned, respectively; determining the arrayed order of the plurality of image cuts; and generating second content by rearranging the plurality of cut areas according to the arrayed order.
(Application) Aug. 7, 2020. (Publication) Nov. 2, 2021.
|
According to the present invention, a lens flare generation method and apparatus are provided that may simulate lens flare effects through paraxial approximation-based linear approximation, generating a lens flare that utilizes the physical characteristics of a lens system at remarkably high speed as compared with the conventional art. Further, according to an embodiment of the present invention, a non-linear effect may be added to a linear pattern-based lens flare effect, generating an actual lens flare reflecting most of the physical characteristics of the lens system. Further, the use of a pre-recorded non-linear pattern allows for the generation of a lens flare with a quality similar to the existing ray-tracing-based simulation at higher speed than the conventional art.
|
A lens flare generation method and apparatus simulates lens flare effects through paraxial approximation-based linear approximation to generate a lens flare utilizing physical characteristics of a lens system while generating a lens flare at high speed. A non-linear effect may be added to a linear pattern-based lens flare effect to generate an actual lens flare reflecting most of physical characteristics generated from the lens system. A pre-recorded non-linear pattern may be used.
|
A method for performing occlusion queries is disclosed. The method includes the steps of: (a) a graphics processing unit (GPU) using a first depth buffer of a first frame to predict a second depth buffer of a second frame; and (b) the GPU performing occlusion queries for the second frame by using the predicted second depth buffer, wherein the first frame is a frame predating the second frame. In accordance with the present invention, a configuration for classifying objects into occluders and occludees is not required, and the occlusion queries for the predicted second frame are acquired in advance at the end of the first frame or the beginning of the second frame.
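A minimal sketch of the depth-buffer prediction step, assuming a forward reprojection of the previous frame's depth with a per-texel screen-motion estimate; the `motion` callback and the nearest-wins merge are illustrative assumptions, not the patent's exact procedure.

```python
def predict_depth_buffer(prev_depth, motion, width, height, far=1.0):
    """Forward-reproject each texel of the previous frame's depth buffer to
    its predicted position in the next frame. Texels where nothing lands
    keep the far value, and where several land, the nearest (smallest
    depth) wins, as in a standard depth test. `motion(x, y)` is a
    hypothetical integer screen-motion estimate from the camera transform."""
    pred = [[far] * width for _ in range(height)]
    for y in range(height):
        for x in range(width):
            nx, ny = motion(x, y)
            if 0 <= nx < width and 0 <= ny < height:
                pred[ny][nx] = min(pred[ny][nx], prev_depth[y][x])
    return pred
```

Occlusion queries issued against `pred` at the start of the next frame then approximate the tests that would otherwise stall mid-frame.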
|
A method and device for efficiently simulating lens flares produced by an optical system is provided. The method comprises the steps of: simulating paths of rays from a light source through the optical system, the rays representing light; and estimating, for points in a sensor plane, an irradiance based on intersections of the simulated paths with the sensor plane.
|
Ph.D. Theses
This dissertation presents a GPU-based rendering algorithm for real-time defocus blur and bokeh effects, which significantly improve the perceptual realism of synthetic images and can emphasize the user’s attention. The defocus blur algorithm combines three distinctive techniques: (1) adaptive discrete geometric level of detail (LOD), made popping-free by blending visibility samples across the two adjacent geometric levels; (2) adaptive visibility/shading sampling via sample reuse; (3) visibility supersampling via height-field ray casting. All three techniques are seamlessly integrated to lower the rendering cost of smooth defocus blur with high visibility sampling rates, while maintaining most of the quality of brute-force accumulation buffering. Also, the author presents a novel parametric model to include expressive chromatic aberrations in defocus blur rendering and its effective implementation using the accumulation buffering. The model modifies the thin-lens model to adopt the axial and lateral chromatic aberrations, which allows us to easily extend them with nonlinear and artistic appearances beyond physical limits. For the dispersion to be continuous, we employ a novel unified 3D sampling scheme, involving both the lens and spectrum. Further, the author shows a spectral equalizer to emphasize particular dispersion ranges. As a consequence, our approach enables more intuitive and explicit control of chromatic aberrations, unlike the previous physically-based rendering methods. Finally, the dissertation presents an efficient bokeh rendering technique that splats pre-computed sprites but takes dynamic visibilities and appearances into account at runtime. To achieve an alias-free look without excessive sampling resulting from strong highlights, the author efficiently samples visibilities using rasterization from highlight sources. Our splatting uses a single precomputed 2D texture, which encodes radial aberrations against object depths.
To further integrate dynamic appearances, the author also proposes an effective parameter sampling scheme for focal distance, radial distortion, optical vignetting, and spectral dispersion. The method allows us to render complex appearances of bokeh efficiently, which greatly improves the photorealism of defocus blur.
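The thin-lens underpinning of the aberration model can be made concrete: the circle of confusion (CoC) for a point at a given depth follows from the thin-lens equation, and axial chromatic aberration amounts to evaluating it with a slightly shifted focal length per color channel. The sketch below is illustrative only; the per-channel focal lengths, f-stop, and the choice to focus the green channel are assumptions, not the dissertation's parameters.

```python
def coc_diameter(depth, sensor_dist, focal_len, f_stop):
    """Thin-lens CoC diameter for a point at `depth` (depth > focal_len),
    given the lens-to-sensor distance; all lengths in mm."""
    aperture = focal_len / f_stop
    image_dist = focal_len * depth / (depth - focal_len)  # thin-lens equation
    return aperture * abs(sensor_dist - image_dist) / image_dist

def chromatic_coc(depth, focus_depth, f_stop, focal_rgb=(49.8, 50.0, 50.2)):
    """Per-channel CoCs with axial chromatic aberration: the sensor is placed
    to focus the green channel, so red and blue stay slightly defocused even
    at the focus depth (the focal shifts here are illustrative)."""
    f_g = focal_rgb[1]
    sensor_dist = f_g * focus_depth / (focus_depth - f_g)
    return tuple(coc_diameter(depth, sensor_dist, f, f_stop) for f in focal_rgb)
```

Making the red/blue focal shifts arbitrary functions of wavelength (rather than physical dispersion curves) is what opens the door to the nonlinear, artistic appearances mentioned above.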
|
This dissertation presents a real-time perceptual rendering framework based on computational visual attention tracking in a virtual environment (VE). The attention tracking identifies the most plausibly attended objects using top-down (goal-driven) contexts inferred from a user's navigation behaviors, in addition to a conventional bottom-up (feature-driven) saliency map. A human experiment evaluated the prediction accuracy of the framework by comparing the objects regarded as attended with human gazes collected by an eye tracker; the results indicate an accuracy at a level well supported by theories of human cognition. The attention tracking framework is then applied to depth-of-field (DOF) rendering and level-of-detail (LOD) management, representative techniques for improving perceptual quality and rendering performance, respectively. Prior to applying attention tracking to DOF rendering, two GPU-based real-time DOF rendering methods are proposed, since few existing methods are suitable for interactive VEs. One extends the previous mipmap-based approach, and the other the previous layered and scattering approaches; both achieve real-time performance without the major artifacts of previous methods. With these DOF rendering methods, attention-guided DOF rendering and LOD management are demonstrated, which use the depths and attention levels of the attended objects as focal depths and fidelity levels, respectively. The attention-guided DOF rendering simulates an interactive lens-blur effect without an eye tracker, and the attention-guided LOD management significantly improves rendering performance with little perceptual degradation.
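The fusion of bottom-up saliency with top-down navigation context can be sketched as a per-object weighted score whose maximum selects the attended object, whose depth then drives the DOF focal depth and whose rank drives LOD fidelity. The field names, the linear fusion, and the equal default weights below are assumptions for illustration; the dissertation's actual model is more elaborate.

```python
from dataclasses import dataclass

@dataclass
class SceneObject:
    name: str
    saliency: float   # bottom-up (feature-driven) term, in [0, 1]
    context: float    # top-down (goal-driven) term from navigation, in [0, 1]
    depth: float      # view-space depth

def attended_object(objects, w_bottom_up=0.5, w_top_down=0.5):
    """Rank objects by a fused attention score and return the most plausibly
    attended one; its depth can serve as the DOF focal depth, and the ranking
    can assign LOD fidelity levels."""
    def score(o):
        return w_bottom_up * o.saliency + w_top_down * o.context
    return max(objects, key=score)
```

For example, a visually busy wall (high saliency, low goal relevance) can lose to a doorway the user is navigating toward, which matches the framework's goal-driven emphasis.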
|
M.S. Theses
Scalable Dynamic Rasterization for Postprocessing 강준원. M.S. Thesis. 2021. |
Hybrid Voxel Tracing for Real-Time Global Illumination 석예찬. M.S. Thesis. 2021. |
Road Scene Image Translation from Day to Night Using Semantic Segmentation Seung Youp Baek. M.S. Thesis. 2021. |
Efficient Object Visibility Culling with Screen-Space Ray Casting 이기범. M.S. Thesis. 2021. |
Efficient and Effective Stratification-Based Technique for Stochastic Sampling 고지은. M.S. Thesis. 2021. |
Variable-Size Tile-Based Web Engine Rendering 정용걸. M.S. Thesis. 2020. |
Real-Time Indirect Illumination Rendering with Dual Paraboloid Map 최재원. M.S. Thesis. 2020. |
Intuitive Volume Deformation Authoring Framework Using Moving Least Squares with Density-Aware Weighting 권순현. M.S. Thesis. 2020. |
Single-Pass Stereo Rendering with Bidirectional Image Warping 김재명. M.S. Thesis. 2020. |
Depth Range Shift and Compression for Real-Time Depth-of-Field Rendering 이제선. M.S. Thesis. 2020. |
Primitive-Based Crack Synthesis with Guidance Vector Field 정효진. M.S. Thesis. 2019. |
Experimental Quality Assessment of Ultra-High-Definition Resolution Image Upscaling of Postprocessing Effects 노유리. M.S. Thesis. 2019. |
Real-Time Intrinsic Image Decomposition Using Reconstructed Indoor Scene for Dynamic Relighting 최윤지. M.S. Thesis. 2019. |
Fast User-Weighted Viewpoint/Lighting Control for Multi-Object Scene 김태문. M.S. Thesis. 2019. |
Real-Time Light Source Estimation from Geometry and Texture of Indoor Scene 박소영. M.S. Thesis. 2019. |
Scalable Parser for Massive OBJ Models Based on GPU 조성훈. M.S. Thesis. 2018. |
Interactive Expressive Editing of Lens Flare Effect 이상민. M.S. Thesis. 2017. |
Efficient Bokeh Synthesis with Ray Tracing through Aspheric Lenses 주현태. M.S. Thesis. 2017. |
Efficient Occlusion Culling Using Depth Warping 김영욱. M.S. Thesis. 2016. |
Interactive Free-Form Authoring of Volume Animation 김기혁. M.S. Thesis. 2016. |
Perceptual Color Enhancement for OLED Display 김강태. M.S. Thesis. 2015. |
High-Level Modular Algorithm Design for GPGPU Computing 정주현. M.S. Thesis. 2015. |
Highly Adaptive Terrain Rendering Using Recursive Tessellation 이현진. M.S. Thesis. 2015. |
Domestic Journal Papers
Transformer-Based Head Motion Prediction Algorithm via Image Generation 변효근, 정문수, 이성길. Journal of KIISE. 2024.07.16, ISSN: 2383-630X, Vol. 51, No. 7, pp. 601-608. 2024. |
Affinity Analysis of the Domestic Top-Conference List in Computer Science 이상원, 이성길. Communications of KIISE. 2024.4, ISSN: 1229-6821, Vol. 42, No. 4, pp. 71-75. 2024. |
Single-Image Dual Averaging for Denoising Photographs Taken with Smartphones 조훈민, 이성길. Journal of KIISE. 2023.1, ISSN: 2383-630X, Vol. 50, No. 1, pp. 40-46. 2023. |
GPU-Based High-Precision Adaptive Vertex Depth Rendering 강준원, 이성길. Journal of KIISE. 2021.7, ISSN: 2383-630X, Vol. 48, No. 7, pp. 756-763. 2021. |
Hybrid Primitive- and Voxel-Based Tracing for Real-Time Global Illumination 석예찬, 이성길. Journal of KIISE. 2021.7, ISSN: 2383-630X, Vol. 48, No. 7, pp. 748-755. 2021. |
Improved Lattice-Based Sampling Using Dart Throwing 고지은, 이성길. Journal of KIISE. 2021.5.15, ISSN: 2383-630X, Vol. 48, No. 5, pp. 527-532. 2021. |
Bidirectional Warping for Single-Pass Stereo Rendering 김재명, 최재원, 이성길. Journal of KIISE. 2019.12, ISSN: 2383-630X, Vol. 46, No. 12, pp. 1215-1221. 2019. |
Real-Time Depth-of-Field Rendering Using Depth Range Shift and Compression 이제선, 이성길. Journal of KIISE. 2019.12, ISSN: 2383-630X, Vol. 46, No. 12, pp. 1215-1221. 2019. |
Real-Time Indirect Illumination Rendering Based on Dual Paraboloid Maps 최재원, 이성길. Journal of KIISE. 2019.11, ISSN: 2383-630X, Vol. 46, No. 11, pp. 1099-1105. 2019. |
GPU-Based Real-Time Light Source Estimation for Augmented Reality 박소영, 조성훈, 이성길. Journal of KIISE. 2019.01, ISSN: 2383-630X, Vol. 46, No. 1, pp. 1-8. 2019. |
User-Weighted Viewpoint/Lighting Control for Multi-Object Scenes 김태문, 이성길. Journal of KIISE. 2018.09, ISSN: 2383-630X, Vol. 45, No. 9, pp. 888-894. 2018. |
Crack Map Synthesis Based on Primitives and Guidance Vector Fields 정효진, 정유나, 이성길. Journal of KIISE. 2018.10, ISSN: 2383-630X, Vol. 45, No. 10, pp. 996-1003. 2018. |
Real-Time Material Estimation Based on the 3D Geometry of Reconstructed Indoor Scenes 최윤지, 이성길. Journal of KIISE. 2018.09, ISSN: 2383-630X, Vol. 45, No. 9, pp. 881-887. 2018. |
Experiments on the Utility of Upscaling for GPU-Based Postprocessing Effects 노유리, 이성길. Journal of KIISE. 2018.07, ISSN: 2383-630X, Vol. 45, No. 7, pp. 618-625. 2018. |
Mobile Virtual Reality Technologies and Applications 최수미, 박우찬, 김용국, 이종원, 장윤, 박준, 이성길, 이명원. KIPS Magazine. 2018.07, ISSN: 1226-9182, Vol. 25, No. 2, pp. 21-38. 2018. |
Lookup-Table-Based Real-Time Nonlinear Lens Flare Rendering 조성훈, 정유나, 이성길. Journal of KIISE. 2017.03, ISSN: 1229-683X, Vol. 44, No. 3, pp. 253-260. 2017. |
Mipmap-Based Deferred Soft Shadow Mapping 김성구, 이성길. Journal of KIISE. 2016.04, ISSN: 1229-683X, Vol. 43, No. 4, pp. 399-403. 2016. |
High-Performance Multi-GPU Rendering Based on Implicit Synchronization 김영욱, 이성길. Journal of KIISE. 2015.11, ISSN: 2383-630X, Vol. 42, No. 11, pp. 1332-1338. 2015. |
Optimizing Visualization Efficiency of Metro-Style GUIs 김강태, 김기혁, 이성길. KTCP. 2014.12.01, ISSN: 2383-6318, Vol. 20, No. 12, pp. 670-675. 2014. |
Technical Trends in High-Level GPGPU Programming Models and Algorithms 정주현, 이상민, 이동규, 이성길. KSBEM. 2014.07.30, ISSN: 1226-7961, Vol. 19, No. 3, pp. 65-75. 2014. |
Precision Improvement of Summed-Area Tables Based on Linear Regression 정주현, 이성길. KIPS Tr. Software and Data Eng. 2013.11.30, ISSN: 2287-5905, Vol. 2, No. 11, pp. 809-814. 2013. |
Real-Time Depth-of-Field Rendering Using Pixel Scattering on the GPU 이성길, 김정현, 최승문. J. Digital Entertainment. ISSN: 2005-0178, Vol. 1, No. 1, pp. 45-49. 2007. |
Domestic Conference Papers
NeRF Rendering Using Depth as Color 임석현, 김민성, 이성길. KCGS. 2024.07.09~2024.07.12, Sono Belle Gyeongju. 2024. |
Relative-Depth-Based Ink-Wash Painting Style Rendering 김민성, 윤세린, 김상민, 이성길. KCGS. 2022.07.13~2022.07.15, pp. 127-128. 2022. |
Authoring and Generation of Volume Deformation Animation Based on Motion Manipulation Elements 김장훈, 유도영, 이성길. KSC. 2021.12.20~2021.12.22, Phoenix Park Hotel, Pyeongchang. 2021. |
Image Encryption Using Nonograms 조훈민, 정문수, 김종민, 이성길. HCI. 2020.08.19~2020.08.21, Sono Belle Vivaldi Park, Hongcheon. 2020. |
GPU-Based Real-Time Multi-Bounce Ambient Occlusion Rendering 안현장, 최재원, 이성길. KCGS 2020. 2020.07.07~2020.07.09, held online. 2020. |
Improved Lattice-Based Sampling Using Dart Throwing 고지은, 이성길. KCC 2020. 2020.07.2~2020.07.4, held online. 2020. |
Mondrian Composition Based on Asymmetric Balance of Visual Saliency 이기범, 백승엽, 최재원, 정유나, 이성길. KCGS. 2019.07.03~2019.07.05, Daemyung Delpino Resort, Sokcho. 2019. |
Improvements for Estimating User-to-Object Distance in VR 김태욱, 정효진, 이성길. KSC. 2018.12.19~2018.12.21, Phoenix Park Hotel, Pyeongchang. 2018. |
GPU-Based Parallelized Progressive Jittered Sampling 고지은, 정효진, 이성길. KSC. 2018.12.19~2018.12.21, Phoenix Park Hotel, Pyeongchang. 2018. |
Depth-Based Real-Time Translucent Glass Rendering 이제선, 최재원, 박소영, 최윤지, 김재명, 이성길. KCGS. 2018.07.11~2018.07.13, The Ocean Resort Grand Ballroom, Yeosu. 2018. |
Resolution Upscaling Experiments on GPU-Based Postprocessing Algorithms 노유리, 정유나, 이성길. KHCI. 2018.01.31~2018.02.02, High1 Resort Convention Center, Gangwon, pp. 300-303. 2018. |
Texture-Based Ambient Occlusion Rendering for Realistic Fur Rendering 김재명, 권순현, 이성길. KHCI. 2018.01.31~2018.02.02, High1 Resort Convention Center, Gangwon, pp. 297-299. 2018. |
Soft Shadow Mapping Based on Deferred Shading 김성구, 이성길. KCC. 2015.06.24~2015.06.26, Jeju National University. 2015. |
Motion Generation for Still Images Using Visual Saliency 이상민, 이성길. KHCI. 2014.12.10~2014.12.12, Grand Hilton Seoul Convention Center, pp. 1-3. 2015. |
Hash Table Construction Using GPU-Based Linked Lists 정주현, 이성길. KCGS. 2014.07.16~2014.07.18, Daemyung Resort, Byeonsan Peninsula, pp. 141-142. 2014. |
Diffraction Rendering of Windshield Wiper Patterns 김영욱, 김강태, 이성길. KCC. 2014.06.25~2014.06.27, Pukyong National University, pp. 1358-1360, Outstanding Paper Award. 2014. |
Optimizing Visualization Efficiency of Metro-Style GUIs 김기혁, 김강태, 이성길. KCC. 2014.06.25~2014.06.27, Pukyong National University, pp. 1297-1299, Outstanding Paper Award. 2014. |
Attention-Based Vergence Control in Stereo Rendering 김영욱, 이성길. HCI Korea. 2014.02.12~2014.02.14, High1 Resort Convention Center, Gangwon, pp. 539-542. 2014. |
Warmth-Based Emotion Delivery System for the Visually Impaired 노효주, 김강태, 이성길. KIPS Fall Conference. 2013.11.08~2013.11.09, Cheju Halla University, pp. 1659-1660. 2013. |
Precision Improvement of Summed-Area Tables Using Linear Regression 정주현, 이성길. KIPS Spring Conference. 2013.05.10~2013.05.11, Pukyong National University, pp. 386-388, Outstanding Paper Award. 2013. |
Technical Trends in GPGPU Programming Models 이현진, 정유나, 이성길. KIPS Spring Conference. 2013.05.10~2013.05.11, Pukyong National University, pp. 389-391. 2013. |
Color Selection Optimization for Metro-Style GUIs 김강태, 정유나, 이성길. HCI Korea. 2013.01.30~2013.02.01, High1 Resort Convention Center, Gangwon, pp. 365-367. 2013. |
Anti-Aliasing 정주현, 문성호, 이성길. HCI Korea. 2013.01.30~2013.02.01, High1 Resort Convention Center, pp. 828-830. 2013. |
Graphics-Based Dolly-Zoom Rendering 김강태, 정유나, 이성길. KIPS Spring Conference. 2012.04.26~2012.04.28, Sunchon National University, pp. 464-465. 2012. |
Multi-Resolution Depth-of-Field Rendering 정유나, 이성길. HCI Korea. 2012.01.11~2012.01.13, Alpensia Resort Convention Center, Gangwon, pp. 212-214. 2012. |
Real-Time Depth-of-Field Rendering Using Interpolation of Anisotropically Filtered Mipmaps 이성길, 김정현, 최승문. HCI Korea. pp. 33-38, Outstanding Paper Award. 2008. |
Motion-Sensing and Haptic Game Controller 전석희, 김상기, 박건혁, 한갑종, 이성길, 최승문, 최승진, 어홍준. HCI Korea. pp. 1-6. 2008. |
Attention Tracking in 3D Virtual Environments 이성길, 김정현. KCGS. 2006.07.03~2006.07.04, Ocean Castle, Anmyeondo, pp. 129-134. 2006. |
Patents
Method and Apparatus for Rearranging Webtoon Content 서충현, 장재혁, 이성길, 정문수, 권순현. Patent Registration. 2023.04.26, Registration No. 10-2527899. 2023. |
Method and Apparatus for Material Estimation of Image-Based Models 이성길, 최윤지, 박소영. Patent Registration. 2020.04.29, Registration No. 10-2108480. 2020. |
Method for Rearranging Comic Content 장재혁, 박찬규, 이성길, 권순현, 박소영. Patent Application. 2019.08.07, Application No. 10-2019-0096330. 2019. |
Scalable GPU-Based Method and Apparatus for Processing Massive OBJ Files 이성길, 조성훈, 정유나, 유범재. Patent Registration. 2018.02.02, Registration No. 10-1827395. 2018. |
이성길. Patent Registration. 2016.09.13, Registration No. 10-1658883. 2016. |
이성길. Patent Registration. 2014.11.20, Registration No. 10-1465658. 2014. |
이성길, 정주현. Patent Registration. 2014.08.12, Registration No. 10-1431715. 2014. |
이성길, 김기혁, 김영욱, 유범재. Patent Application. 2014.07.03, Application No. 10-2014-0083129. 2014. |
Saliency-Based Authoring of Physical Effects for Real-Time Visual and Haptic Feedback 최승문, 김명찬, 이성길. Patent Registration. 2013.09.02, Registration No. 10-1305735, Application No. 10-2012-0064031. 2013. |
Awards
GPU-Based Real-Time Light Source Estimation for Augmented Reality 이성길. KSC2020. 2020.12.22, KIISE, Society Outstanding Paper Award. 2020. |
Real-Time Indirect Illumination Rendering Based on Dual Paraboloid Maps 최재원, 이성길. Journal of KIISE, Vol. 46, No. 11, pp. 1099-1105. 2020.07.03, KIISE, Outstanding Paper Award. 2020. |
Improved Lattice-Based Sampling Using Dart Throwing 고지은, 이성길. KCC2020. 2020.06.30, KIISE, Best Paper Award. 2020. |
Improvements for Estimating User-to-Object Distance in VR 김태욱, 정효진, 이성길. KSC2018. 2019.02.04, KIISE, Undergraduate/Junior Paper Contest Encouragement Award. 2018. |
Soft Shadow Mapping Based on Deferred Shading 김성구, 이성길. KCC2015. 2015.06.24~2015.06.26, Jeju National University, Outstanding Paper Award. 2015. |
Diffraction Rendering of Windshield Wiper Patterns 김영욱, 김강태, 이성길. KCC2014. 2014.06.25~2014.06.27, Pukyong National University, Outstanding Presentation Paper Award. 2014. |
Optimizing Visualization Efficiency of Metro-Style GUIs 김기혁, 김강태, 이성길. KCC2014. 2014.06.25~2014.06.27, Pukyong National University, Outstanding Presentation Paper Award. 2014. |
Precision Improvement of Summed-Area Tables Using Linear Regression 정주현, 이성길. KIPS Spring Conference. 2013.05.10~2013.05.11, Pukyong National University, Outstanding Paper Award. 2013. |
|