Mapping of neural network models onto two-dimensional processor arrays
Abstract
Proper implementation strategies are needed to fully exploit the potential of artificial neural nets (ANNs); this in turn depends on the selection of an appropriate mapping scheme, especially when implementing neural nets in a parallel processing environment. In this paper, we discuss the mapping of ANNs onto two-dimensional processor arrays. We show that, by assigning the neurons along the diagonal of the array and suitably distributing the weight values across it, the computations involved in the operation of neural nets can be carried out with minimum communication overhead. The mapping is illustrated on two popular neural net models: the Hopfield net and the multilayer perceptron (MLP) with back-propagation learning. One iteration of an n-neuron Hopfield net takes 4(P − 1)·⌈n/P⌉ unit shifts on a P × P processor array. For the same target architecture, a single iteration of back-propagation learning on an MLP of n neurons takes at most 8(P − 1)·⌈n/P⌉ shifts.
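The following is a minimal Python sketch of how such a diagonal assignment can be simulated, assuming a block-partitioned weight matrix on a P × P mesh with state blocks held by the diagonal processors, and counting one block hop of ⌈n/P⌉ values per unit shift. Under that accounting, two distribution passes and two accumulation passes yield the 4(P − 1)·⌈n/P⌉ figure. The function name and the shift accounting are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

# Illustrative sketch (not the paper's code): one Hopfield iteration
# y = sign(W x) on a simulated P x P mesh with the neuron state blocks
# assigned to the diagonal processors. Communication is not simulated
# hop by hop; unit shifts are tallied under the assumption that each
# hop of a block of b = ceil(n/P) values costs b unit shifts.
def hopfield_iteration_on_mesh(W, x, P):
    n = len(x)
    b = -(-n // P)                       # block size ceil(n/P)
    N = P * b                            # pad n up to P*b for uniform blocks
    Wp = np.zeros((N, N)); Wp[:n, :n] = W
    xp = np.zeros(N);      xp[:n] = x

    shifts = 0
    # Phase 1: diagonal processor (j, j) sends its state block x_j up and
    # down column j; the farthest processor is P-1 hops away, so the two
    # directions cost at most 2*(P-1)*b unit shifts (assumed accounting).
    shifts += 2 * (P - 1) * b

    # Local compute: processor (i, j) forms the partial product W_ij @ x_j.
    partial = np.zeros((P, P, b))
    for i in range(P):
        for j in range(P):
            W_ij = Wp[i*b:(i+1)*b, j*b:(j+1)*b]
            partial[i, j] = W_ij @ xp[j*b:(j+1)*b]

    # Phase 2: partial sums flow along each row back to the diagonal
    # processor (i, i), again at most P-1 hops of b values in each of the
    # two directions: another 2*(P-1)*b unit shifts.
    shifts += 2 * (P - 1) * b
    y = np.sign(partial.sum(axis=1).reshape(N))[:n]
    return y, shifts

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    n, P = 10, 4
    W = rng.standard_normal((n, n)); W = (W + W.T) / 2   # symmetric weights
    np.fill_diagonal(W, 0)                               # no self-connections
    x = np.sign(rng.standard_normal(n))
    y, shifts = hopfield_iteration_on_mesh(W, x, P)
    assert np.allclose(y, np.sign(W @ x))                # matches direct product
    print(shifts, 4 * (P - 1) * (-(-n // P)))            # both 36 for n=10, P=4
```

The final print confirms that the tallied shift count coincides with the abstract's 4(P − 1)·⌈n/P⌉ bound for this example; doubling the pattern over the forward and backward passes of back-propagation is consistent with the stated 8(P − 1)·⌈n/P⌉ maximum for the MLP.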