Intra-Clustering: Accelerating On-Chip Communication for Data Parallel Architectures (Conference Paper)

abstract

  • © 2015 IEEE. Modern computation workloads contain abundant Data Level Parallelism (DLP), which calls for specialized data parallel architectures such as Graphics Processing Units (GPUs). With parallel programming models such as CUDA and OpenCL, GPUs can be programmed easily for non-graphics applications, making them a cost-effective platform for data parallel computing. The large quantity of available parallelism places heavy stress on the memory system, as the limited number of pins constrains the number of memory controllers on the chip. This creates a potential bottleneck for the performance scalability of GPUs. To accelerate communication with the memory system, we propose the Intra-Clustering on-chip network for data parallel architectures, which is built upon a traditional two-dimensional electrical mesh network with memory controllers connected through a nanophotonic ring and compute cores grouped into clusters. Our evaluations with CUDA benchmarks show that the Intra-Clustering architecture can reduce communication delay by an average of 17% (up to 32%) and improve IPC by an average of 5% (up to 11.5%).
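  The abstract names the topology but not its mechanics. The back-of-the-envelope model below, in Python, illustrates the intuition: if core-to-memory-controller traffic takes a short electrical path to a cluster gateway and then rides a cheap nanophotonic ring to any memory controller, the long cross-chip electrical portion of each path shrinks. Every parameter here (mesh size, cluster size, gateway placement, memory-controller placement, hop costs) is an assumption chosen purely for illustration; none comes from the paper, and the printed reduction is a property of this toy model, not the paper's reported 17%.

    # Minimal hop-count sketch of the Intra-Clustering idea, under assumed
    # parameters (not taken from the paper): a 16x16 electrical mesh of cores,
    # four 8x8 clusters, four memory controllers (MCs) at the mesh edge
    # midpoints, and a photonic ring modeled as a single cheap traversal that
    # reaches any MC from a cluster gateway.
    import itertools
    import statistics

    MESH_DIM = 16         # assumed mesh dimension
    CLUSTER_DIM = 8       # assumed cluster dimension (four clusters total)
    ELECTRICAL_HOP = 1.0  # assumed cost of one electrical mesh hop
    PHOTONIC_HOP = 0.25   # assumed cost of traversing the nanophotonic ring

    # Assumed MC placement: one per mesh edge midpoint.
    MCS = [(0, MESH_DIM // 2), (MESH_DIM - 1, MESH_DIM // 2),
           (MESH_DIM // 2, 0), (MESH_DIM // 2, MESH_DIM - 1)]

    def mesh_hops(a, b):
        """Manhattan distance: hops under dimension-ordered mesh routing."""
        return abs(a[0] - b[0]) + abs(a[1] - b[1])

    def baseline_delay(core):
        """Baseline mesh: travel electrically all the way to the nearest MC."""
        return min(mesh_hops(core, mc) for mc in MCS) * ELECTRICAL_HOP

    def cluster_gateway(core):
        """Assumed gateway position: the center node of the core's cluster."""
        gx = (core[0] // CLUSTER_DIM) * CLUSTER_DIM + CLUSTER_DIM // 2
        gy = (core[1] // CLUSTER_DIM) * CLUSTER_DIM + CLUSTER_DIM // 2
        return (gx, gy)

    def intra_clustering_delay(core):
        """Intra-Clustering: short electrical path to the cluster gateway,
        then one photonic ring traversal that reaches any MC."""
        gw = cluster_gateway(core)
        return mesh_hops(core, gw) * ELECTRICAL_HOP + PHOTONIC_HOP

    cores = list(itertools.product(range(MESH_DIM), repeat=2))
    base = statistics.mean(baseline_delay(c) for c in cores)
    ic = statistics.mean(intra_clustering_delay(c) for c in cores)
    print(f"avg core->MC delay, baseline mesh:    {base:.2f}")
    print(f"avg core->MC delay, intra-clustering: {ic:.2f}")
    print(f"relative reduction in this toy model: {(base - ic) / base:.0%}")

  Under these assumptions the model prints a roughly 12% lower average delay for the clustered design: the electrical segment is bounded by the cluster radius, while the ring amortizes the cross-chip distance to the memory controllers. The real paper evaluates full CUDA benchmarks on a detailed network model, so its numbers differ.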

name of conference

  • 2015 International Symposium on Computer Architecture and High Performance Computing Workshop (SBAC-PADW)

published proceedings

  • 2015 International Symposium on Computer Architecture and High Performance Computing Workshop (SBAC-PADW)

author list (cited authors)

  • Yuan, W., Boyapati, R., Wang, L., Jang, H., Jin, Y., Yum, K. H., & Kim, E. J.

citation count

  • 1

complete list of authors

  • Yuan, Wen; Boyapati, Rahul; Wang, Lei; Jang, Hyunjun; Jin, Yuho; Yum, Ki Hwan; Kim, Eun Jung

publication date

  • October 2015