Direct learning and knowledge updating in neural nets
Conference Paper
Abstract
The conventional generalized delta rule and many supervised learning methods are essentially based on iterative error-gradient algorithms; such training procedures are extremely slow and often fail to converge to the optimal weights. If such neural networks were used in a dynamical environment, frequent off-line retraining would be inevitable. In this paper, training is interpreted as the acquisition, or extraction, of relevant information from a teacher who has knowledge of the dynamical system or process models. Convergence to the optimal weights is then equivalent to optimal extraction of useful knowledge and its conversion into the synaptic strengths of a neural network. Basically, a priori information or knowledge is built directly into the structure and connections of a neural network during its design phase. The neural net is then instantly ready for on-line implementation as soon as the design is completed. During on-line operation, the synaptic strengths can be continually updated by a recursive identification scheme as more input data become available.
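
The abstract names a recursive identification scheme but gives no equations. A standard scheme of this kind is exponentially weighted recursive least squares (RLS), and the minimal Python sketch below illustrates the idea under that assumption: a single linear unit whose weights are set at design time from a priori model knowledge and then refined on-line, one sample at a time. The class name RLSNeuron, the forgetting factor, and all parameter values are hypothetical illustrations, not taken from the paper.

import numpy as np

class RLSNeuron:
    """Single linear unit: weights initialized from prior model knowledge,
    then updated on-line by recursive least squares (RLS).
    Illustrative sketch only; not the paper's exact algorithm."""

    def __init__(self, w0, p0=1e3, forgetting=0.99):
        self.w = np.asarray(w0, dtype=float)    # a priori synaptic strengths
        self.P = p0 * np.eye(len(self.w))       # inverse-correlation matrix
        self.lam = forgetting                   # forgetting factor (tracks drift)

    def predict(self, x):
        return self.w @ x

    def update(self, x, y):
        """One recursive identification step on a new sample (x, y)."""
        x = np.asarray(x, dtype=float)
        Px = self.P @ x
        k = Px / (self.lam + x @ Px)            # RLS gain vector
        err = y - self.w @ x                    # prediction error
        self.w = self.w + k * err               # correct synaptic strengths
        self.P = (self.P - np.outer(k, Px)) / self.lam
        return err

# Design phase: weights built directly from a known (hypothetical) linear model.
unit = RLSNeuron(w0=[2.0, -1.0])
# On-line operation: the net is used immediately and refined as data arrive.
for x, y in [([1.0, 0.5], 1.6), ([0.2, 1.0], -0.7)]:
    unit.update(x, y)

Because the initial weights already encode the teacher's model, the unit produces usable outputs from the first sample, and the recursive updates only correct residual model error rather than learning from scratch.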