SHF:Small:High Performance On-Chip Interconnects Design for Multicore Accelerators


  • Advances in technology have made it possible to accommodate an increasing number of transistors on a die, enabling Multicore Accelerators such as Graphics Processing Units (GPUs) that integrate diverse components on a single chip. GPUs have recently gained attention as a cost-effective platform for data-parallel computing, and their rapid scaling increases the importance of designing an effective on-chip interconnection network, which significantly impacts overall system performance. This project proposes to develop a framework for high-performance, energy-efficient on-chip network mechanisms in synergy with Multicore Accelerator architectures. The desirable properties of a target on-chip network include reusability across a wide range of Multicore Accelerator architectures, maximal use of routing resources, and support for reliable, energy-efficient data transfer. The project will make significant advances in understanding the interplay between Multicore Accelerator and Network-on-Chip (NoC) architectures, leading to solutions that scale in performance, area, and energy. Whereas the dominant communication in Chip Multiprocessor (CMP) systems is core-to-core through shared caches, the dominant traffic in Multicore Accelerators is core-to-memory, which makes the memory controllers hot spots. Because Multicore Accelerators execute many threads to hide memory latency, it is critical for the underlying NoC to provide high bandwidth.
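The hot-spot effect described above can be illustrated with a minimal sketch. The mesh size, memory-controller placement, and traffic pattern below are hypothetical (a 4x4 mesh with two memory controllers on one edge, uniform core-to-memory requests, dimension-ordered XY routing); the point is only that when every packet targets one of a few memory-controller nodes, the routers at and near those nodes carry far more traffic than the mesh average.

```python
import random

# Hypothetical 4x4 mesh with two memory controllers (MCs) on row 0.
# All traffic is core-to-memory, routed X-first then Y (XY routing).
random.seed(0)
W = H = 4
mcs = [(1, 0), (2, 0)]                     # assumed MC placement
nodes = [(x, y) for x in range(W) for y in range(H)]
cores = [n for n in nodes if n not in mcs]

def xy_route(src, dst):
    """Yield every router a packet traverses after leaving src: X first, then Y."""
    x, y = src
    while x != dst[0]:
        x += 1 if dst[0] > x else -1
        yield (x, y)
    while y != dst[1]:
        y += 1 if dst[1] > y else -1
        yield (x, y)

load = {n: 0 for n in nodes}
for _ in range(10_000):
    src = random.choice(cores)
    dst = random.choice(mcs)               # every request targets an MC
    for hop in xy_route(src, dst):
        load[hop] += 1

# Routers at and near the MCs carry far more traffic than the mesh average.
avg = sum(load.values()) / len(nodes)
print(max(load.values()) / avg)            # noticeably greater than 1
```

Under this assumed setup the peak router load is several times the mesh average, which is the congestion a throughput-oriented NoC design must relieve.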
The key contributions expected from the project are: (1) building a simulation testbed and analyzing the behavior of on-chip traffic workloads in Multicore Accelerators; (2) proposing mechanisms for a high-performance and energy-efficient NoC by utilizing emerging memory and NoC technologies in addition to novel topologies and routing mechanisms; (3) developing methodologies at the NoC level that will support data prefetching mechanisms in Multicore Accelerators; and (4) providing multicast support and packet coalescing in the on-chip network to improve system throughput. The results from this project are likely to foster new research directions in several areas of Computer Architecture and Parallel Computing. High-performance and energy-aware computing and communication research is also applicable to other areas, such as Embedded Systems and Cloud Computing. We will develop web-based tutorials to present and disseminate the results of this project, including tools and techniques, to a broad audience.
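Packet coalescing, contribution (4) above, can be sketched in a few lines. The 128-byte line size and the warp-style access pattern below are assumptions for illustration, not details from the project: per-thread memory requests that fall in the same cache line are merged into a single network packet, cutting the number of packets injected into the NoC.

```python
from collections import defaultdict

LINE = 128  # assumed cache-line size in bytes

def coalesce(addresses):
    """Group raw byte addresses by cache line; one packet per distinct line.

    Returns a dict mapping line index -> list of requester (thread) ids.
    """
    packets = defaultdict(list)
    for tid, addr in enumerate(addresses):
        packets[addr // LINE].append(tid)
    return packets

# 32 threads of a warp reading consecutive 4-byte words from one line:
warp = [0x1000 + 4 * t for t in range(32)]
pkts = coalesce(warp)
print(len(warp), "requests ->", len(pkts), "packet(s)")  # 32 requests -> 1 packet(s)
```

With a strided or scattered access pattern the same 32 requests would map to many lines and so many packets, which is why coalescing support in the network directly affects achievable throughput.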

date/time interval

  • 2014 - 2018