RDFNet: RGB-D Multi-level Residual Feature Fusion for Indoor Semantic Segmentation
- Title
- RDFNet: RGB-D Multi-level Residual Feature Fusion for Indoor Semantic Segmentation
- Authors
- Lee, Seungyong; Park, Seong Jin; Hong, Ki Sang
- Date Issued
- 2017-10-22
- Publisher
- ICCV
- Abstract
- In multi-class indoor semantic segmentation using RGB-D data, incorporating depth features into RGB features has been shown to improve segmentation accuracy. However, previous studies have not fully exploited the potential of multi-modal feature fusion, e.g., they simply concatenate RGB and depth features or average RGB and depth score maps. To learn the optimal fusion of multi-modal features, this paper presents a novel network that extends the core idea of residual learning to RGB-D semantic segmentation. Our network effectively captures multi-level RGB-D CNN features by including multi-modal feature fusion blocks and multi-level feature refinement blocks. Feature fusion blocks learn residual RGB and depth features and their combinations to fully exploit the complementary characteristics of RGB and depth data. Feature refinement blocks learn the combination of fused features from multiple levels to enable high-resolution prediction. Our network can efficiently train discriminative multi-level features from each modality end-to-end by taking full advantage of skip-connections. Comprehensive experiments demonstrate that the proposed architecture achieves state-of-the-art accuracy on two challenging RGB-D indoor datasets, NYUDv2 and SUN RGB-D.
- URI
- https://oasis.postech.ac.kr/handle/2014.oak/41856
- Article Type
- Conference
- Citation
- The IEEE International Conference on Computer Vision (ICCV), pp. 4980-4989, 2017-10-22
- Files in This Item:
- There are no files associated with this item.
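The fusion scheme the abstract describes, refining each modality with residual units, merging by element-wise summation, and then refining the fused result, can be sketched numerically. This is a minimal NumPy illustration of that data flow only: the channel count, weight shapes, and the per-channel linear maps standing in for 1x1 convolutions are all hypothetical, not the paper's implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    return np.maximum(x, 0.0)

def residual_unit(x, w1, w2):
    # Pre-activation residual unit; the two matrix products stand in for
    # the convolutions of a real residual block (illustrative only).
    return x + relu(relu(x) @ w1) @ w2

def mmf_block(rgb, depth, params):
    # Multi-modal fusion: pass each modality through its own residual
    # unit, merge by element-wise summation, then refine the fused
    # features with one more residual unit.
    r = residual_unit(rgb, *params["rgb"])
    d = residual_unit(depth, *params["depth"])
    return residual_unit(r + d, *params["fuse"])

C = 8  # hypothetical channel count
params = {k: (0.1 * rng.standard_normal((C, C)),
              0.1 * rng.standard_normal((C, C)))
          for k in ("rgb", "depth", "fuse")}

rgb_feat = rng.standard_normal((4, C))    # 4 spatial positions, C channels
depth_feat = rng.standard_normal((4, C))

fused = mmf_block(rgb_feat, depth_feat, params)
print(fused.shape)  # (4, 8)
```

The element-wise sum after per-modality residual refinement is what lets the depth branch contribute a learned residual correction to the RGB features rather than being naively concatenated or averaged.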