- Title
- Contrastive learning in random neural networks and its relation to gradient-descent learning
- Creator
- Romariz, Alexandre; Gelenbe, Erol
- Relation
- Computer and Information Sciences II: 26th International Symposium on Computer and Information Sciences (London, 26-28 September 2011), p. 511-517
- Publisher Link
- http://dx.doi.org/10.1007/978-1-4471-2155-8_65
- Publisher
- Springer
- Resource Type
- conference paper
- Date
- 2012
- Description
- We apply Contrastive Hebbian Learning to the recurrent Random Neural Network model. Under this learning rule, weight adaptation is performed based on the difference between the network’s dynamics when it is input-free and when a teaching signal is imposed. We show that the resulting weight changes are a first-order approximation to the gradient-descent algorithm for quadratic error minimization when overall firing rates are constant. The algorithm requires no matrix inversions, and no constraints are placed on network connectivity. A learning result on the XOR problem is presented as an empirical confirmation of these ideas.
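The two-phase rule described in the abstract can be illustrated in miniature. The sketch below implements generic contrastive Hebbian learning on a small symmetric sigmoid network, not the Random Neural Network model used in the paper: activities settle once with only inputs clamped (free phase), once with the teaching signal also clamped, and weights change by the difference of the two Hebbian outer products. All function names and parameters are illustrative assumptions, not the authors' code.

```python
import numpy as np

def settle(W, clamp_idx, clamp_val, n_steps=200, dt=0.1):
    """Relax unit activities toward a fixed point of sigmoid dynamics,
    holding the units in clamp_idx at clamp_val throughout."""
    n = W.shape[0]
    x = np.zeros(n)
    x[clamp_idx] = clamp_val
    for _ in range(n_steps):
        target = 1.0 / (1.0 + np.exp(-(W @ x)))  # sigmoid of net input
        x = x + dt * (target - x)                # leaky relaxation step
        x[clamp_idx] = clamp_val                 # keep clamped units fixed
    return x

def chl_update(W, inp_idx, inp_val, out_idx, teach_val, eta=0.5):
    """One contrastive Hebbian weight update (illustrative sketch)."""
    # Free phase: only the input units are clamped.
    x_free = settle(W, inp_idx, inp_val)
    # Clamped phase: inputs and outputs (teaching signal) are clamped.
    x_clamp = settle(W,
                     np.concatenate([inp_idx, out_idx]),
                     np.concatenate([inp_val, teach_val]))
    # Weight change = difference of clamped and free Hebbian products.
    dW = eta * (np.outer(x_clamp, x_clamp) - np.outer(x_free, x_free))
    return W + dW
```

Note that no matrix inversion appears anywhere: both phases are plain relaxation runs, which mirrors the abstract's claim that the algorithm needs no matrix inversions.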
- Subject
- contrastive Hebbian learning; random neural network model; weight adaptation; teaching signal
- Identifier
- http://hdl.handle.net/1959.13/1357028
- Identifier
- uon:31855
- Identifier
- ISBN:9781447121541
- Language
- eng
- Reviewed