Mapping tensor products onto VLSI networks with reduced I/O (Conference Paper)

abstract

  • This paper presents a methodology for designing folded VLSI networks that implement tensor-product forms. Tensor-product formulations yield very efficient expressions for a large number of computations in digital signal processing and matrix arithmetic. The resulting networks can trade off total time delay against I/O bandwidth and chip area. The main goal is to parametrize the VLSI architecture so that it can be implemented under various packaging constraints, including the available number of I/O pins, the available chip area, and restrictions on maximum wire length. Our methods result in folded VLSI networks with an optimal AT² trade-off for digital filtering and multidimensional transforms, where A is the total area of the VLSI circuit (or chip) and T is its total time delay.
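
  • Illustrative note (not from the paper): tensor-product (Kronecker) forms owe much of their efficiency to the standard identity (A ⊗ B) vec(X) = vec(B X A^T), which lets a tensor-product operator be applied without ever forming the full (mn) x (mn) matrix. The Python/NumPy sketch below is a minimal, generic demonstration of that identity with arbitrary placeholder sizes m and n; it is not taken from the architecture described in the paper.

        import numpy as np

        # Apply a tensor-product (Kronecker) operator two ways and compare.
        rng = np.random.default_rng(0)
        m, n = 4, 3                               # placeholder sizes, chosen arbitrarily
        A = rng.standard_normal((m, m))
        B = rng.standard_normal((n, n))
        x = rng.standard_normal(m * n)

        # Direct evaluation: builds the full (m*n) x (m*n) matrix explicitly.
        y_direct = np.kron(A, B) @ x

        # Factored evaluation: reshape x into an n x m matrix (column-major),
        # apply B on the left and A^T on the right, then re-vectorize.
        X = x.reshape((n, m), order="F")
        y_factored = (B @ X @ A.T).reshape(-1, order="F")

        assert np.allclose(y_direct, y_factored)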

name of conference

  • Proceedings of 4th Great Lakes Symposium on VLSI

published proceedings

  • Proceedings of 4th Great Lakes Symposium on VLSI

author list (cited authors)

  • Elnaggar, A., Alnuweiri, H. M., & Ito, M. R.

citation count

  • 3

complete list of authors

  • Elnaggar, A.; Alnuweiri, H. M.; Ito, M. R.

publication date

  • January 1994