TF-MVP: Novel Sparsity-Aware Transformer Accelerator with Mixed-Length Vector Pruning
- Title
- TF-MVP: Novel Sparsity-Aware Transformer Accelerator with Mixed-Length Vector Pruning
- Authors
- Yoo, Eunji; Park, Gunho; Min, Jung Gyu; Kwon, Se Jung; Park, Baeseong; Lee, Dongsoo; Lee, Youngjoo
- Date Issued
- 2023-07-11
- Publisher
- Institute of Electrical and Electronics Engineers Inc.
- Abstract
- We present TF-MVP, an energy-efficient sparsity-aware transformer accelerator, built on novel algorithm-hardware co-optimization techniques. Starting from a conventional fine-grained pruning map, we develop, for the first time, the direction strength metric to quantitatively analyze pruning patterns, identifying the dominant pruning direction and vector size of each layer. We then propose mixed-length vector pruning (MVP) to generate a hardware-friendly pruned transformer model, which is fully supported by the TF-MVP accelerator and its reconfigurable PE structure. Implemented in a 28nm CMOS technology with 4096 multiply-accumulate operators, TF-MVP achieves 377 GOPs/W when accelerating the GPT-2 small model, 2.09 times better than the state-of-the-art sparsity-aware transformer accelerator.
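- Note: the abstract's two key ideas (quantifying the dominant pruning direction of a fine-grained mask, then pruning hardware-friendly contiguous vectors) can be illustrated with a small conceptual sketch. The functions below are illustrative assumptions, not the paper's actual formulas: `direction_strength` simply counts fully-zero length-`vec_len` vectors along each axis, and `mixed_length_vector_prune` zeroes row-direction vectors whose mean magnitude falls below a threshold.

```python
import numpy as np

def direction_strength(mask, vec_len=4):
    """Conceptual sketch (not the paper's exact metric): gauge whether a
    fine-grained pruning mask clusters along rows or columns by counting
    contiguous all-zero vectors of length `vec_len` in each direction."""
    zeros = (mask == 0)
    rows, cols = zeros.shape
    # All-zero vectors along the row direction
    row_vecs = zeros[:, :cols - cols % vec_len].reshape(rows, -1, vec_len)
    row_hits = int(row_vecs.all(axis=2).sum())
    # All-zero vectors along the column direction
    col_vecs = zeros[:rows - rows % vec_len, :].T.reshape(cols, -1, vec_len)
    col_hits = int(col_vecs.all(axis=2).sum())
    return row_hits, col_hits

def mixed_length_vector_prune(weights, vec_len=4, threshold=0.1):
    """Illustrative vector pruning: zero out contiguous length-`vec_len`
    row-direction vectors whose mean magnitude is below `threshold`."""
    w = weights.copy()
    rows, cols = w.shape
    usable = cols - cols % vec_len          # ignore a ragged tail, if any
    blocks = np.abs(w[:, :usable]).reshape(rows, -1, vec_len)
    keep = blocks.mean(axis=2) >= threshold  # one decision per vector
    w[:, :usable] *= np.repeat(keep, vec_len, axis=1)
    return w
```

- In the paper, the vector length and pruning direction are chosen per layer from the direction-strength analysis so that the pruned model maps directly onto the reconfigurable PE array; the sketch fixes both for brevity.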
- URI
- https://oasis.postech.ac.kr/handle/2014.oak/121304
- Article Type
- Conference
- Citation
- 60th ACM/IEEE Design Automation Conference, DAC 2023, 2023-07-11