Through the training process, the weights in the network may be thought of as minimizing an energy function and sliding down an energy surface. In a trained network, each pattern memorized by the network acts as an attractor; a pattern presented to the network settles toward the nearest point of attraction as information is propagated around the network.

The information processing objective of the system is to associate the components of an input pattern with a holistic representation of the pattern, making the system a Content Addressable Memory (CAM). This means that once trained, the system will recall whole patterns, given a portion or a noisy version of the input pattern.

The Hopfield Network consists of a graph data structure with weighted edges and separate procedures for training and applying the structure. The network structure is fully connected (a node connects to all other nodes except itself) and the edges (weights) between the nodes are bidirectional.

The weights of the network can be learned via a one-shot method (one iteration through the patterns) if all patterns to be memorized by the network are known in advance. Alternatively, the weights can be updated incrementally using the Hebb rule, where a weight is increased when the two connected neurons take the same state and decreased when their states differ. The one-shot calculation of the network weights for a single node occurs as follows:
w_i,j = Σ_{k=1}^{N} v_ik × v_jk
where w_i,j is the weight between neuron i and j, N is the number of input patterns, v is the input pattern and v_ik is the i-th attribute on the k-th input pattern.
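The one-shot weight calculation above can be sketched in Python with NumPy. This is an illustrative implementation, not taken from the original text; the function name and the example patterns are assumptions for demonstration.

```python
import numpy as np

def train_weights(patterns):
    """One-shot Hebbian weights: w_i,j = sum over k of v_ik * v_jk,
    with no self-connections (w_i,i = 0)."""
    patterns = np.array(patterns)    # shape (N, n): N patterns, n nodes
    weights = patterns.T @ patterns  # sum of outer products over patterns
    np.fill_diagonal(weights, 0)     # a node does not connect to itself
    return weights

# Two bipolar patterns over a 4-node network (hypothetical example data)
weights = train_weights([[1, -1, 1, -1],
                         [1, 1, -1, -1]])
```

Note that the resulting weight matrix is symmetric, which reflects the bidirectional edges of the network.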

The propagation of the information through the network can be asynchronous where a random node is selected each iteration, or synchronously, where the output is calculated for each node before being applied to the whole network. Propagation of the information continues until no more changes are made or until a maximum number of iterations has completed, after which the output pattern from the network can be read. The activation for a single node is calculated as follows:
n_i = Σ_j w_i,j × n_j
where n_i is the activation of the i-th neuron, w_i,j is the weight between the nodes i and j, and n_j is the output of the j-th neuron. The activation is transferred into an output using a transfer function, typically a step function as follows:
f(a) = 1 if a ≥ θ, otherwise -1
where the threshold θ is typically fixed at 0.
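The asynchronous propagation procedure described above can be sketched as follows. This is a minimal illustration under the stated assumptions (bipolar states, step transfer with threshold 0); the function name is hypothetical.

```python
import random
import numpy as np

def recall(weights, state, max_iterations=100, threshold=0):
    """Asynchronously update random nodes until no change occurs
    or the iteration cap is reached, then return the output pattern."""
    state = np.array(state)
    n = len(state)
    for _ in range(max_iterations):
        changed = False
        for i in random.sample(range(n), n):  # visit nodes in random order
            activation = weights[i] @ state   # n_i = sum_j w_i,j * n_j
            output = 1 if activation >= threshold else -1  # step transfer
            if output != state[i]:
                state[i] = output
                changed = True
        if not changed:  # stable state reached: an attractor
            break
    return state
```

Given weights trained on a single pattern, presenting a noisy version of that pattern should settle back to the stored attractor.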

The Hopfield network may be used to solve the recall problem of matching cues for an input pattern to an associated pre-learned pattern.

The transfer function for turning the activation of a neuron into an output is typically a step function f(a) in {-1, 1} (preferred), or more traditionally f(a) in {0, 1}. The input vectors are typically normalized to bipolar values x in {-1, 1}.

The network can be propagated asynchronously (where a random node is selected and output generated), or synchronously (where the output for all nodes are calculated before being applied).
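For contrast with the asynchronous scheme, a single synchronous step can be sketched as below: every activation is computed from the current state before any output is applied. This is an illustrative sketch, not code from the original text.

```python
import numpy as np

def synchronous_step(weights, state, threshold=0):
    """One synchronous update: compute all activations from the
    current state, then apply the step transfer to every node at once."""
    activations = weights @ np.array(state)          # all n_i together
    return np.where(activations >= threshold, 1, -1) # step transfer
```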

Weights can be learned in a one-shot or incremental method based on how much information is known about the patterns to be learned. All neurons in the network are typically both input and output neurons, although other network topologies have been investigated (such as the designation of input and output neurons). A Hopfield network has limits on the patterns it can store and retrieve accurately from memory, described approximately by N < 0.15 × n, where N is the number of patterns that can be stored and retrieved and n is the number of nodes in the network.
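As a worked example of the capacity limit N < 0.15 × n, the helper below (a hypothetical name, for illustration only) estimates how many patterns a network of a given size can reliably hold.

```python
def max_patterns(n_nodes):
    """Approximate storage capacity of a Hopfield network: N < 0.15 * n."""
    return int(0.15 * n_nodes)

# A 100-node network can reliably store roughly 15 patterns.
capacity = max_patterns(100)
```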