Copyright © 2007 Elsevier Ltd. All rights reserved.
Trends in Cognitive Sciences, Volume 11, Issue 10, 428-434, 1 October 2007

doi:10.1016/j.tics.2007.09.004


Review


Learning multiple layers of representation

Geoffrey E. Hinton 

Department of Computer Science, University of Toronto, 10 King's College Road, Toronto, M5S 3G4, Canada



Abstract

To achieve its impressive performance in tasks such as speech perception or object recognition, the brain extracts multiple levels of representation from the sensory input. Backpropagation was the first computationally efficient model of how neural networks could learn multiple layers of representation, but it required labeled training data and it did not work well in deep networks. The limitations of backpropagation learning can now be overcome by using multilayer neural networks that contain top-down connections and training them to generate sensory data rather than to classify it. Learning multilayer generative models might seem difficult, but a recent discovery makes it easy to learn nonlinear distributed representations one layer at a time.
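The "one layer at a time" idea the abstract alludes to can be sketched as greedy layer-wise training of simple generative modules, here restricted Boltzmann machines fit with one-step contrastive divergence (CD-1). This is a minimal illustrative sketch, not the paper's implementation: the class and function names, learning rate, layer sizes, and random toy data are all assumptions chosen for a self-contained example.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class RBM:
    """Restricted Boltzmann machine trained with one-step
    contrastive divergence (CD-1). Hypothetical sketch."""
    def __init__(self, n_visible, n_hidden, lr=0.1):
        self.W = rng.normal(0, 0.01, size=(n_visible, n_hidden))
        self.b_v = np.zeros(n_visible)   # visible-unit biases
        self.b_h = np.zeros(n_hidden)    # hidden-unit biases
        self.lr = lr

    def hidden_probs(self, v):
        # P(h = 1 | v): bottom-up recognition pass
        return sigmoid(v @ self.W + self.b_h)

    def visible_probs(self, h):
        # P(v = 1 | h): top-down generative pass
        return sigmoid(h @ self.W.T + self.b_v)

    def cd1_step(self, v0):
        # Positive phase: hidden activities driven by the data.
        ph0 = self.hidden_probs(v0)
        h0 = (rng.random(ph0.shape) < ph0).astype(float)
        # Negative phase: one reconstruction of the data.
        pv1 = self.visible_probs(h0)
        ph1 = self.hidden_probs(pv1)
        # CD-1 approximation to the log-likelihood gradient.
        n = v0.shape[0]
        self.W += self.lr * (v0.T @ ph0 - pv1.T @ ph1) / n
        self.b_v += self.lr * (v0 - pv1).mean(axis=0)
        self.b_h += self.lr * (ph0 - ph1).mean(axis=0)
        return np.mean((v0 - pv1) ** 2)  # reconstruction error

def train_stack(data, layer_sizes, epochs=5):
    """Greedy layer-wise learning: each module is trained on the
    hidden activities of the module below it."""
    rbms, x = [], data
    for n_hidden in layer_sizes:
        rbm = RBM(x.shape[1], n_hidden)
        for _ in range(epochs):
            rbm.cd1_step(x)
        rbms.append(rbm)
        x = rbm.hidden_probs(x)  # representation fed to the next layer
    return rbms

# Toy binary data: 64 patterns over 20 visible units.
data = (rng.random((64, 20)) < 0.5).astype(float)
stack = train_stack(data, [16, 8])
```

Each module learns without labels, and the stack's hidden layers form progressively more abstract distributed representations of the sensory input, which is the sense in which deep generative learning sidesteps backpropagation's need for labeled data.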