Competitive neural networks on message-passing parallel computers
Ceccarelli, M.
1993-01-01
Abstract
The paper reports two techniques for parallelizing, on a MIMD multicomputer, a class of learning algorithms (competitive learning) for artificial neural networks widely used in pattern recognition and understanding. The first technique, following the divide et impera (divide-and-conquer) strategy, achieves O(n/P + log P) time for n neurons and P processors interconnected as a tree. A modification of the algorithm allows the application of a systolic technique with the processors interconnected as a ring; this technique has the advantage that the communication time does not depend on the number of processors. The two techniques are also compared on the basis of predicted and measured performance on a transputer-based MIMD machine. As the number of processors grows, the advantage of the systolic approach increases; conversely, the divide et impera approach is more advantageous in the retrieval phase.
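The divide et impera scheme sketched in the abstract can be illustrated as follows. This is a hypothetical, sequential simulation (not the paper's transputer implementation): each of P simulated processors holds a block of n/P neuron weight vectors, finds its local winner in O(n/P) time, and the local candidates are then combined pairwise as a tree reduction in O(log P) steps, after which the winning neuron's weights are moved toward the input, the usual competitive (winner-take-all) update.

```python
import numpy as np

rng = np.random.default_rng(0)
n, P, dim = 16, 4, 3            # n neurons, P simulated processors (illustrative sizes)
W = rng.random((n, dim))        # weight vectors, partitioned into blocks of n/P
x = rng.random(dim)             # one input pattern

def local_winner(block, offset, x):
    # Each processor scans only its n/P local neurons: O(n/P) work.
    d = np.linalg.norm(block - x, axis=1)
    i = int(np.argmin(d))
    return (d[i], offset + i)   # (local min distance, global neuron index)

# Local phase: one candidate per processor.
blk = n // P
cands = [local_winner(W[p * blk:(p + 1) * blk], p * blk, x) for p in range(P)]

# Tree reduction: halve the candidate list each step -> O(log P) combining steps.
while len(cands) > 1:
    cands = [min(cands[j:j + 2]) for j in range(0, len(cands), 2)]

dist, winner = cands[0]

# Winner-take-all update: move only the winning neuron toward the input.
eta = 0.1
W[winner] += eta * (x - W[winner])
```

On a real tree-connected machine the reduction runs across processors (each parent keeps the smaller of its children's candidate distances), which is where the log P term in the O(n/P + log P) bound comes from; the O(n/P) term is the local scan.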