Bandwidth-efficient on-chip interconnect designs for GPGPUs
Conference Paper
Overview
abstract
-
© 2015 ACM. Modern computational workloads require abundant thread-level parallelism (TLP), necessitating highly-parallel, many-core accelerators such as General Purpose Graphics Processing Units (GPGPUs). GPGPUs place a heavy demand on the on-chip interconnect between the many cores and a few memory controllers (MCs). Thus, traffic is highly asymmetric, impacting on-chip resource utilization and system performance. Here, we analyze the communication demands of typical GPGPU applications, and propose efficient Network-on-Chip (NoC) designs to meet those demands. We show that the proposed schemes improve performance by up to 64.7%. Compared to the best-of-class prior work, our VC monopolizing and partitioning schemes improve performance by 25%.
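The VC monopolizing and partitioning schemes are detailed in the full paper; purely as an illustration of the partitioning idea sketched in the abstract, the C++ toy model below shows a router input port whose virtual channels (VCs) are statically split between core-to-MC request traffic and MC-to-core reply traffic, so the heavy reply stream cannot starve requests. The VC count, the even split, and all identifiers (VcPartitionedPort, TrafficClass, allocate) are illustrative assumptions, not taken from the paper.

// Toy model only (not the paper's implementation): VCs at a router input port
// are statically partitioned by traffic class, reflecting the asymmetric
// many-core-to-few-MC request / few-MC-to-many-core reply traffic in GPGPUs.
#include <array>
#include <cstdio>
#include <optional>

enum class TrafficClass { Request, Reply };

struct VcPartitionedPort {
    static constexpr int kNumVcs = 4;     // assumed VC count, illustrative
    std::array<bool, kNumVcs> busy{};     // true if a VC is currently allocated

    // Assumed split: VCs [0,1] serve requests, VCs [2,3] serve replies.
    std::optional<int> allocate(TrafficClass cls) {
        const int lo = (cls == TrafficClass::Request) ? 0 : 2;
        const int hi = (cls == TrafficClass::Request) ? 2 : kNumVcs;
        for (int vc = lo; vc < hi; ++vc) {
            if (!busy[vc]) { busy[vc] = true; return vc; }
        }
        return std::nullopt;              // this class's partition is exhausted
    }

    void release(int vc) { busy[vc] = false; }
};

int main() {
    VcPartitionedPort port;
    // Reply packets can only occupy their own partition, leaving request VCs free.
    auto r0 = port.allocate(TrafficClass::Reply);
    auto r1 = port.allocate(TrafficClass::Reply);
    auto r2 = port.allocate(TrafficClass::Reply);    // blocked: reply partition full
    auto q0 = port.allocate(TrafficClass::Request);  // still succeeds
    std::printf("reply VCs: %d %d (%s), request VC: %d\n",
                r0.value(), r1.value(), r2 ? "third ok" : "third blocked", q0.value());
    return 0;
}

The sketch covers only static partitioning; the monopolizing scheme mentioned in the abstract is not modeled here.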
name of conference
-
DAC '15: The 52nd Annual Design Automation Conference 2015
published proceedings
-
Proceedings of the 52nd Annual Design Automation Conference
author list (cited authors)
-
Jang, H., Kim, J., Gratz, P., Yum, K. H., & Kim, E. J.
citation count
complete list of authors
-
Jang, Hyunjun||Kim, Jinchun||Gratz, Paul||Yum, Ki Hwan||Kim, Eun Jung
publication date
-
2015
publisher
-
ACM
published in
Research
keywords
-
Bandwidth
-
GPGPU
-
Network-on-chip
Identity
Digital Object Identifier (DOI)
International Standard Book Number (ISBN) 13
Additional Document Info
start page
end page
volume