Open Access System for Information Sharing


Thesis
Full metadata record
Files in This Item:
There are no files associated with this item.
DC Field: Value (Language)
dc.contributor.author: 김효민
dc.date.accessioned: 2024-05-10T16:34:55Z
dc.date.available: 2024-05-10T16:34:55Z
dc.date.issued: 2024
dc.identifier.other: OAK-2015-10357
dc.identifier.uri: http://postech.dcollection.net/common/orgView/200000736100 (ko_KR)
dc.identifier.uri: https://oasis.postech.ac.kr/handle/2014.oak/123309
dc.description: Doctor
dc.description.abstract: The field of 3D human reconstruction plays a crucial role in various applications, such as virtual reality, animation, and human-computer interaction. This dissertation aims to address the challenge of 3D human reconstruction using a single RGB-D camera, offering a more accessible approach for broader public use. However, a single camera cannot capture the entire body in one scan due to field-of-view limitations and occlusion. To maximize the utilization of the available data from each frame, we propose three core methods and an additional technique for the 3D human reconstruction process. First, we introduce "Deep Virtual Markers," a framework to accurately estimate dense positional features from articulated 3D models. This method utilizes a sparse convolutional neural network to associate 3D points with virtual marker labels, employing soft labels for advanced interclass relationship learning. Our framework demonstrated superior effectiveness in a public challenge. Second, we present a method for generating spatio-temporal texture maps for dynamic objects using a single RGB-D camera. Addressing the challenge of invisible areas in single-camera setups, we propose an MRF optimization-based solution to borrow textures from other frames, resulting in more realistic, dynamic textures. Our approach outperforms those reliant on a single texture map, offering a better representation of active appearances. Third, we introduce "LaplacianFusion," an innovative technique to reconstruct detailed 3D human body shapes from depth or 3D point cloud sequences. This method uses Laplacian coordinates on a human template mesh with a surface function for precise local detail representation. It allows for high-quality reconstruction and manipulation of surface details, outperforming traditional techniques. While "LaplacianFusion" successfully retains fine details in 3D human body shapes using Laplacian coordinates, its output quality heavily depends on the input depth map's quality, which may be noisy or inaccurate. Conversely, color images generally offer higher resolution and more detail. Given previous successes in normal estimation, we propose a supplementary technique that leverages this rich detail from color images to enhance the precision of our 3D human reconstruction process. Lastly, we focus on improving normal integration techniques for 3D reconstruction, addressing surface discontinuities often caused by self-occlusion. We develop a novel discretization approach, incorporating auxiliary edges to bridge smooth surfaces and explicitly represent hidden jumps. This approach results in a more detailed and accurate reconstruction of discontinuities.
dc.language: eng
dc.publisher: 포항공과대학교 (POSTECH)
dc.title: 3D Human Reconstruction Using Single RGB-D Camera
dc.type: Thesis
dc.contributor.college: 컴퓨터공학과 (Computer Science and Engineering)
dc.date.degree: 2024-02
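The abstract above describes "LaplacianFusion" as representing local surface detail through Laplacian coordinates defined on a human template mesh. For reference only, the sketch below shows the standard uniform (umbrella) Laplacian-coordinate construction, where each coordinate is the vertex position minus the centroid of its one-ring neighbors. This is a generic textbook formulation in Python/NumPy, not the dissertation's implementation, and the tetrahedron data is a made-up toy example.

# Minimal sketch: uniform Laplacian coordinates of a triangle mesh.
# Generic formulation for illustration; not the dissertation's code.
import numpy as np

def laplacian_coordinates(vertices, faces):
    """delta_i = v_i - mean of v_i's one-ring neighbors (uniform weights)."""
    n = len(vertices)
    neighbors = [set() for _ in range(n)]
    for a, b, c in faces:                      # collect one-ring adjacency
        neighbors[a].update((b, c))
        neighbors[b].update((a, c))
        neighbors[c].update((a, b))
    delta = np.zeros_like(vertices)
    for i, nbrs in enumerate(neighbors):
        delta[i] = vertices[i] - vertices[list(nbrs)].mean(axis=0)
    return delta

# Toy example (a single tetrahedron), purely for demonstration.
V = np.array([[0.0, 0.0, 0.0],
              [1.0, 0.0, 0.0],
              [0.0, 1.0, 0.0],
              [0.0, 0.0, 1.0]])
F = np.array([[0, 1, 2], [0, 1, 3], [0, 2, 3], [1, 2, 3]])
print(laplacian_coordinates(V, F))

Because Laplacian coordinates encode each vertex relative to its neighborhood, they capture local detail that survives smooth, large-scale deformation of the template, which is the property the abstract refers to.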


Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.
