Abstract
We present a semi-automated framework that translates daytime road scene images into their night-time counterparts. Unlike recent studies based on Generative Adversarial Networks (GANs), our framework requires no learning for the translation and thus avoids the random failures of learned models. It uses semantic annotation to extract scene elements, perceives the scene structure and depth, and applies a per-element translation. Experimental results demonstrate that our framework synthesizes higher-resolution results without translation artifacts.
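The per-element translation described above can be sketched as follows. This is a minimal illustration only, not the paper's method: it assumes a precomputed semantic mask and applies a hypothetical per-class tone transform (the class IDs and darkening factors are invented for the example).

```python
import numpy as np

# Hypothetical per-class RGB multipliers for a day-to-night tone shift;
# the class IDs and factor values are illustrative, not from the paper.
NIGHT_FACTORS = {
    0: np.array([0.05, 0.05, 0.15]),  # sky: near-black with a blue tint
    1: np.array([0.30, 0.30, 0.35]),  # road: dimmed but still visible
    2: np.array([0.20, 0.20, 0.25]),  # building
}

def translate_day_to_night(image, seg_mask):
    """Apply a per-element tone transform guided by a semantic mask.

    image:    H x W x 3 float array in [0, 1]
    seg_mask: H x W integer array of per-pixel class labels
    """
    out = image.copy()
    for cls, factor in NIGHT_FACTORS.items():
        region = seg_mask == cls          # boolean mask for this element
        out[region] = image[region] * factor  # darken each element separately
    return np.clip(out, 0.0, 1.0)

# Tiny demo: a 2x2 white "scene" whose top row is sky (0), bottom row road (1)
img = np.ones((2, 2, 3))
mask = np.array([[0, 0], [1, 1]])
night = translate_day_to_night(img, mask)
```

Operating on each semantic element independently is what lets such a pipeline treat, say, sky and road differently, rather than applying one global color transform to the whole image.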
* Copyright Disclaimer: paper preprints on this page are provided only for personal academic use, not for redistribution.
Bibliography
@inproceedings{baek20:dton,
  title={{Day-to-Night Road Scene Image Translation Using Semantic Segmentation}},
  author={Seung Youp Baek and Sungkil Lee},
  booktitle={{Pacific Graphics Posters}},
  pages={47--48},
  year={2020}
}