A performance analysis of sparse neural associative memory

Sanjay Biswas, University of Pennsylvania

Abstract

According to one folk tenet, neural associative memories are robust: computation in them is not substantially affected by damage to network components. According to another, dense interconnectivity is a sine qua non for efficient use of resources. In this thesis we analyse the validity of these folk tenets.

We show that the second tenet is invalid, at least for recurrent networks used as associative memories. A special kind of sparse recurrent architecture, which we call the block architecture, in which the neurons are partitioned into a number of fully interconnected blocks, suffices to invalidate it. We prove that when the blocks have size $\Omega(\log n)$, one can construct codes of exponential size for which the network corrects errors from a $\rho n$ ball (for some $\rho < 1/8$) around every codeword with probability approaching 1. We show, however, that there are other codes for which the performance of the network deteriorates.

Next we examine the validity of the first folk tenet. Although, as noted above, there are codes for which the performance of the block architecture deteriorates, we show that the deterioration is not substantial provided the blocks have size $\Omega(\log n)$. We investigate whether this result also holds for another architecture, which we call the randomly sparsed architecture. We conjecture that under random sparsity, too, errors can be corrected from an arbitrary $\rho n$ ball ($\rho < 1/2$) provided the degree of the interconnection graph is $\Omega(\log n)$. We also investigate the effect of sparsity in one non-neural paradigm of associative memory and observe the same performance behaviour.

In summary, we conclude that sparse recurrent networks (and associative memories in general) perform well provided the interconnection graph has degree $\Omega(\log n)$, and that sparsity can sometimes be a boon: by taking advantage of it, we can construct smart codes that yield an exponential increase in the efficiency with which each neuron is used.
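
To make the block architecture concrete, here is a minimal sketch in Python, assuming a Hopfield-style recurrent network with Hebbian (outer-product) weights restricted to a block-diagonal interconnection graph; the function names (train_block_hopfield, recall) and parameter choices are illustrative, not taken from the thesis.

import numpy as np

def train_block_hopfield(patterns, block_size):
    # Hebbian outer-product rule, then zero every weight that falls
    # outside the diagonal blocks (the sparse interconnection graph).
    n = patterns.shape[1]
    W = patterns.T @ patterns / n
    np.fill_diagonal(W, 0.0)
    mask = np.zeros((n, n), dtype=bool)
    for start in range(0, n, block_size):
        end = min(start + block_size, n)
        mask[start:end, start:end] = True   # fully interconnected block
    return np.where(mask, W, 0.0)

def recall(W, probe, max_iters=50):
    # Synchronous retrieval: iterate x <- sign(Wx) until a fixed point.
    x = probe.copy()
    for _ in range(max_iters):
        x_new = np.sign(W @ x)
        x_new[x_new == 0] = 1.0
        if np.array_equal(x_new, x):
            break
        x = x_new
    return x

# Toy usage (hypothetical sizes): n = 32 neurons in blocks of 8, one
# stored +/-1 pattern, probe corrupted in two coordinates.
rng = np.random.default_rng(0)
n, block_size = 32, 8
pattern = rng.choice([-1.0, 1.0], size=(1, n))
W = train_block_hopfield(pattern, block_size)
probe = pattern[0].copy()
flip = rng.choice(n, size=2, replace=False)
probe[flip] *= -1
print(np.array_equal(recall(W, probe), pattern[0]))  # True: errors corrected

Because the weight matrix is block-diagonal, each block behaves as an independent fully connected subnetwork, which is why the thesis can analyse error correction block by block; the choice of which codewords to store against this block structure is what distinguishes the "smart" codes from the codes on which performance deteriorates.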

Subject Area

Electrical engineering

Recommended Citation

Biswas, Sanjay, "A performance analysis of sparse neural associative memory" (1993). Dissertations available from ProQuest. AAI9321357.
https://repository.upenn.edu/dissertations/AAI9321357
