Mapping of neural network models onto massively parallel hierarchical computer systems
Academic Article
Overview
abstract
This paper investigates the implementation of neural networks on massively parallel hierarchical computer systems with a hypernet topology. The proposed mapping scheme exploits the inherent structure of hypernets: multiple copies of the neural network are processed in different subnets, each executing a portion of the training set, and the weight changes computed in the subnets are then combined to adjust the synaptic weights in all copies. An expression is derived to estimate the time for all-to-all broadcasting, the principal mode of communication in the parallel implementation of neural networks. This expression is then used to estimate the time required for the various phases of the neural network algorithm and, in turn, the speedup achieved by the proposed implementation.
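The combining step described above can be illustrated with a minimal sketch. This is not the paper's algorithm, only a hypothetical data-parallel loop in the same spirit: each "subnet" holds a copy of the weights of a simple linear model, computes a delta-rule weight change on its own shard of the training set, and the per-subnet changes are summed (standing in for the all-to-all broadcast) and applied to every copy.

```python
import numpy as np

def local_weight_change(weights, inputs, targets, lr=0.1):
    """One delta-rule step on a single subnet's shard of the data."""
    preds = inputs @ weights
    grad = inputs.T @ (preds - targets) / len(inputs)
    return -lr * grad  # weight change computed locally in this subnet

def parallel_epoch(weights, shards):
    """Each subnet computes a change on its shard; the changes are
    combined (as if all-to-all broadcast) and applied to all copies."""
    deltas = [local_weight_change(weights, x, y) for x, y in shards]
    return weights + sum(deltas)

rng = np.random.default_rng(0)
X = rng.normal(size=(40, 3))
true_w = np.array([[1.0], [-2.0], [0.5]])
Y = X @ true_w

# Split the training set across 4 hypothetical subnets.
shards = [(X[i::4], Y[i::4]) for i in range(4)]

w = np.zeros((3, 1))
for _ in range(200):
    w = parallel_epoch(w, shards)
```

Because every copy applies the same combined update, all replicas stay identical after each epoch, which is the invariant the mapping scheme relies on.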