
Deep Defocus Map Estimation using Domain Adaptation

Junyong Lee, Sungkil Lee, Sunghyun Cho, and Seungyong Lee

IEEE Conf. Computer Vision and Patt. Recog. (CVPR), 12222–12230, 2019.
Abstract
In this paper, we propose the first end-to-end convolutional neural network (CNN) architecture, Defocus Map Estimation Network (DMENet), for spatially varying defocus map estimation. To train the network, we produce a novel depth-of-field (DOF) dataset, SYNDOF, where each image is synthetically blurred with a ground-truth depth map. Due to the synthetic nature of SYNDOF, the feature characteristics of its images can differ from those of real defocused photos. To address this gap, we use domain adaptation that transfers the features of real defocused photos into those of synthetically blurred ones. Our DMENet consists of four subnetworks: blur estimation, domain adaptation, content preservation, and sharpness calibration networks. The subnetworks are connected to each other and jointly trained with their corresponding supervisions in an end-to-end manner. Our method is evaluated on publicly available blur detection and blur estimation datasets, and the results show state-of-the-art performance.
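To illustrate how the four subnetworks described above might fit together, the following is a minimal, hypothetical sketch of the joint composition. All function names, shapes, and the stand-in loss terms are illustrative assumptions for exposition, not the authors' actual implementation or training objectives.

```python
import numpy as np

# Hypothetical stand-ins for DMENet's four subnetworks (assumed names/shapes).

def blur_estimation(image):
    # Predicts a per-pixel defocus (blur amount) map from the input image.
    # Stand-in: returns an all-zero map of the image's spatial size.
    return np.zeros(image.shape[:2])

def domain_adaptation(real_feats, synth_feats):
    # Aligns features of real defocused photos with synthetically blurred
    # ones; stand-in: a scalar discrepancy to be minimized during training.
    return float(np.mean((real_feats - synth_feats) ** 2))

def content_preservation(image, defocus_map):
    # Auxiliary term keeping image content intact during adaptation (stand-in).
    return float(np.mean(defocus_map))

def sharpness_calibration(defocus_map):
    # Calibrates the predicted map so sharp regions map to zero blur (stand-in).
    return np.clip(defocus_map, 0.0, None)

def dmenet_forward(real_image, synth_image):
    # End-to-end composition: the subnetworks are wired together and would be
    # trained jointly, with each supplying its own supervision signal.
    d_real = blur_estimation(real_image)
    d_synth = blur_estimation(synth_image)
    da_loss = domain_adaptation(d_real, d_synth)
    cp_loss = content_preservation(real_image, d_real)
    calibrated = sharpness_calibration(d_real)
    return calibrated, da_loss + cp_loss
```

The point of the sketch is the wiring: one shared blur-estimation branch feeds the domain-adaptation, content-preservation, and sharpness-calibration terms, which is what allows joint end-to-end training.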
Paper preprints, slides, supplementary materials, and Google Scholar entry
* Copyright Disclaimer: paper preprints on this page are provided for personal academic use only, not for redistribution.
Bibliography
@INPROCEEDINGS{lee19:dmenet,
  title     = {{Deep Defocus Map Estimation using Domain Adaptation}},
  author    = {Junyong Lee and Sungkil Lee and Sunghyun Cho and Seungyong Lee},
  booktitle = {{IEEE Conf. Computer Vision and Patt. Recog. (CVPR)}},
  pages     = {12222--12230},
  year      = {2019}
}




27336, College of Software, Sungkyunkwan University, Tel. +82 31-299-4917, Seobu-ro 2066, Jangan-gu, Suwon, 16419, South Korea
Campus map (how to reach CGLab)