Departmental Papers (ESE)

Document Type

Conference Paper

Date of this Version

November 2000

Comments

Reprinted from the 12th Conference on Neural Information Processing Systems (NIPS 2000), November 2000, pages 87-93.

NOTE: At the time of publication, author Daniel D. Lee was affiliated with Bell Laboratories. Currently (March 2005), he is a faculty member in the Department of Electrical and Systems Engineering at the University of Pennsylvania.

Abstract

The principle of maximizing mutual information is applied to learning overcomplete and recurrent representations. The underlying model consists of a network of input units driving a larger number of output units with recurrent interactions. In the limit of zero noise, the network is deterministic and the mutual information can be related to the entropy of the output units. Maximizing this entropy with respect to both the feedforward connections and the recurrent interactions results in simple learning rules for both sets of parameters. The conventional independent component analysis (ICA) learning algorithm can be recovered as a special case where the numbers of input and output units are equal and there are no recurrent connections. The application of these new learning rules is illustrated on a simple two-dimensional input example.
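
The paper itself is the authoritative source for the overcomplete, recurrent learning rules. As a concrete illustration of the special case named in the abstract, the sketch below implements conventional infomax ICA (the square, non-recurrent case) as a natural-gradient entropy-maximization update on a two-dimensional mixture. The source model, learning rate, and step count are illustrative assumptions, not taken from the paper.

```python
# Minimal sketch of the special case noted in the abstract: with equal
# numbers of input and output units and no recurrent connections, entropy
# maximization reduces to conventional infomax ICA (Bell & Sejnowski rule,
# natural-gradient form). All settings here are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)

# Two independent super-Gaussian sources, linearly mixed into 2-D inputs.
n_samples = 5000
S = rng.laplace(size=(2, n_samples))       # independent sources
A = np.array([[1.0, 0.6], [0.4, 1.0]])     # unknown mixing matrix
X = A @ S                                  # observed two-dimensional inputs

W = np.eye(2)    # feedforward weights (square case, no recurrence)
eta = 0.01       # learning rate (illustrative)

for step in range(2000):
    idx = rng.integers(0, n_samples, size=100)   # mini-batch of inputs
    x = X[:, idx]
    u = W @ x                                    # linear outputs
    y = 1.0 / (1.0 + np.exp(-u))                 # logistic output units
    # Natural-gradient ascent on the output entropy:
    #   dW  ~  (I + (1 - 2y) u^T) W
    W += eta * (np.eye(2) + (1.0 - 2.0 * y) @ u.T / x.shape[1]) @ W

# After learning, W @ A should be close to a scaled permutation matrix,
# i.e. the network has unmixed the independent sources.
print(W @ A)
```

Extending this to the overcomplete, recurrent setting treated in the paper requires the additional learning rule for the recurrent interactions, which this sketch deliberately omits.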

Date Posted: 30 April 2005