Open Access System for Information Sharing


Thesis
Cited 0 times in Web of Science · Cited 0 times in Scopus
Full metadata record
Files in This Item:
There are no files associated with this item.
dc.contributor.author: 오용정
dc.date.accessioned: 2023-08-31T16:32:52Z
dc.date.available: 2023-08-31T16:32:52Z
dc.date.issued: 2023
dc.identifier.other: OAK-2015-10115
dc.identifier.uri: http://postech.dcollection.net/common/orgView/200000660590 (ko_KR)
dc.identifier.uri: https://oasis.postech.ac.kr/handle/2014.oak/118312
dc.description: Master
dc.description.abstract:
Federated learning (FL) is a distributed machine learning (ML) technique for training a global model on a parameter server (PS) through collaboration with wireless devices, each with its own local training dataset. FL allows the PS to train the global model without direct access to the devices' data and therefore helps preserve the privacy of the data generated at the devices. Thanks to this advantage, FL has received a great deal of attention as a means of enabling privacy-sensitive ML applications; however, bringing this technique into real-world AI applications also faces several challenges. A major challenge in FL is the significant communication overhead required to transmit the local model updates from the wireless devices to the PS. This problem becomes more severe as the global model on the PS becomes more sophisticated, because the communication overhead grows with the number of global model parameters.

In this thesis, I propose two communication-efficient FL frameworks inspired by quantized compressed sensing (QCS): (i) a communication-efficient FL framework via scalar QCS and (ii) a communication-efficient FL framework via vector QCS. The common strategy of the proposed frameworks is to compress the local model update at each device by applying dimensionality reduction followed by scalar or vector quantization. By harnessing the benefits of both dimensionality reduction and quantization, the proposed frameworks effectively reduce the communication overhead of local model update transmissions. For accurate aggregation of the local model updates from the compressed signals at the PS, I put forth an approximate minimum mean square error (MMSE) approach for global model update reconstruction using the expectation-maximization generalized-approximate-message-passing (EM-GAMP) algorithm. Assuming a Bernoulli Gaussian-mixture prior, this algorithm iteratively updates the posterior mean and variance of the local model updates from the compressed signals. I also present a low-complexity approach for model update reconstruction: I first use the Bussgang theorem to aggregate the local model updates from the compressed signals, and then compute an approximate MMSE estimate of the aggregated model update using the EM-GAMP algorithm.

Furthermore, to minimize the reconstruction error of the global model update under a link-capacity constraint, I optimize both the design of the quantizer and the key parameters of the compression. Accounting for the reconstruction error, I analyze the convergence rates of the proposed frameworks for a non-convex loss function. These comprehensive studies demonstrate that the proposed frameworks have great potential for enabling FL in real-world wireless networks with limited link capacity.
dc.language: eng
dc.publisher: 포항공과대학교 (Pohang University of Science and Technology)
dc.title: Communication-Efficient Federated Learning via Quantized Compressed Sensing
dc.type: Thesis
dc.contributor.college: 전자전기공학과 (Department of Electrical Engineering)
dc.date.degree: 2023-2
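
The abstract above describes compressing each device's local model update with dimensionality reduction followed by quantization, then reconstructing the aggregated update at the PS. The following is a minimal Python/NumPy sketch of that pipeline. All sizes, names, and the uniform scalar quantizer are illustrative assumptions, and a minimum-norm least-squares solve stands in for the EM-GAMP approximate-MMSE reconstruction developed in the thesis.

    import numpy as np

    rng = np.random.default_rng(seed=0)

    def compress(update, A, n_bits):
        """Device side: random projection (dimensionality reduction)
        followed by uniform scalar quantization of the measurements."""
        z = A @ update                              # M-dim measurements, M < N
        lo, hi = z.min(), z.max()
        step = (hi - lo) / (2 ** n_bits - 1)        # uniform quantizer step size
        idx = np.round((z - lo) / step).astype(np.uint16)
        return idx, lo, step                        # what the device transmits

    def dequantize(idx, lo, step):
        """PS side: map received quantization indices back to values."""
        return idx * step + lo

    # Illustrative sizes (assumptions): N parameters, M compressed measurements.
    N, M, n_devices, n_bits = 1024, 256, 10, 4
    A = rng.standard_normal((M, N)) / np.sqrt(M)    # projection shared by all devices

    # Sparse local model updates, matching the Bernoulli-Gaussian flavor
    # of the prior assumed in the abstract.
    updates = [(rng.random(N) < 0.05) * rng.standard_normal(N)
               for _ in range(n_devices)]

    # Each device transmits M * n_bits bits plus two scalars per round,
    # instead of N full-precision values.
    received = [dequantize(*compress(g, A, n_bits)) for g in updates]

    # Aggregate in the compressed domain, then reconstruct. The thesis uses
    # EM-GAMP approximate-MMSE recovery; least squares stands in for it here.
    y_avg = np.mean(received, axis=0)
    g_hat, *_ = np.linalg.lstsq(A, y_avg, rcond=None)

    g_true = np.mean(updates, axis=0)
    err = np.linalg.norm(g_hat - g_true) / np.linalg.norm(g_true)
    print(f"relative reconstruction error: {err:.3f}")

With these example settings, each device sends roughly 256 × 4 bits per round instead of 1024 × 32 bits. The quality of the recovered aggregate is exactly what the EM-GAMP reconstruction and the quantizer/compression-parameter optimization described in the abstract are designed to improve.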


Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.
