DC Field | Value | Language |
---|---|---|
dc.contributor.author | 지상우 | - |
dc.date.accessioned | 2023-08-31T16:34:02Z | - |
dc.date.available | 2023-08-31T16:34:02Z | - |
dc.date.issued | 2023 | - |
dc.identifier.other | OAK-2015-10165 | - |
dc.identifier.uri | http://postech.dcollection.net/common/orgView/200000660186 | ko_KR |
dc.identifier.uri | https://oasis.postech.ac.kr/handle/2014.oak/118362 | - |
dc.description | Doctor | - |
dc.description.abstract | Adversarial robustness is one of the most important properties of deep neural network (DNN) models because most DNN models are vulnerable to adversarial examples. Because an adversarial example can induce misbehavior in DNN models, an arms race between adversarial attacks and defenses has begun. Since new adversarial attacks are devised to bypass existing defenses, identifying and preventing the security threats posed by new attacks is necessary for building robust DNN systems. To that end, we address two types of new adversarial attacks: flicker fusion attacks and attacks tailored to transfer learning. First, we propose a novel flicker fusion attack method suitable for attacking an arbitrary video. As the flicker fusion attack is a newly devised attack methodology, the feasibility of the attack via a different medium (e.g., video) needs to be examined. The key idea of the proposed attack method is to suppress the luminance changes induced by the perturbation, since humans are more sensitive to luminance flickering than to chromaticity flickering. After demonstrating the effectiveness of the proposed attack method, we suggest a countermeasure against flicker fusion attacks. Second, we propose a defense method against attacks tailored to transfer learning. As transfer learning opens the door to new attacks that generate adversarial examples using the pre-trained teacher model, a defense method against them is required. The key idea of the proposed defense is to train a student model whose feature representation differs from that of the teacher model. A triplet loss is used to place mimic-attack examples close to their source images and far from their target images in the feature space of the student model. The method is evaluated on three different transfer learning tasks with diverse configurations. Building on these results, we present discussions and future work for both studies. | - |
dc.language | eng | - |
dc.publisher | 포항공과대학교 | - |
dc.title | Machine Learning Security on Video and Transfer Learning | - |
dc.title.alternative | 비디오와 전이 학습에서의 머신 러닝 보안 | - |
dc.type | Thesis | - |
dc.contributor.college | Department of Computer Science and Engineering (컴퓨터공학과) | - |
dc.date.degree | 2023-02 | - |
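The abstract states that the defense trains the student model with a triplet loss that pulls mimic-attack examples toward their source images and pushes them away from their target images in feature space. The thesis's exact formulation is not given in this record, so the following is only a minimal sketch of the standard triplet margin loss; the role assignments (anchor = mimic-attack feature, positive = source-image feature, negative = target-image feature) are an assumption based on the abstract's wording.

```python
import numpy as np

def triplet_loss(anchor, positive, negative, margin=1.0):
    """Standard triplet margin loss: max(0, d(a, p) - d(a, n) + margin).

    Assumed roles, following the abstract (illustrative, not the thesis's
    exact formulation):
      anchor   -- student-model feature of a mimic-attack example
      positive -- student-model feature of its source image
      negative -- student-model feature of its target image
    """
    d_pos = np.sum((anchor - positive) ** 2, axis=-1)  # squared distance to positive
    d_neg = np.sum((anchor - negative) ** 2, axis=-1)  # squared distance to negative
    return np.maximum(0.0, d_pos - d_neg + margin)

# Toy check: anchor already far closer to the positive than to the negative,
# by more than the margin, so the loss is zero.
a = np.array([0.0, 0.0])
p = np.array([0.1, 0.0])
n = np.array([2.0, 0.0])
print(triplet_loss(a, p, n))
```

Minimizing this loss over the student model's parameters drives the attack example's feature toward the source image and away from the target image, so a transfer attack crafted in the teacher's feature space no longer lands near its intended target in the student's feature space.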