This Demonstration shows how a neural-network key exchange protocol for encrypted communication works using the Hebbian learning rule. The idea: person A wants to communicate with person B, but they cannot exchange a key through a secure channel, so they set up two topologically identical neural networks and train them on the same inputs until the weights of their respective networks match.

The "epoch" slider moves the system in time through the training epochs, while the "randomize" button creates a new configuration of the network. The system is said to be "paired" when the weights of both networks match. These networks are trained for only 1000 epochs and may get stuck in a local minimum, so they may never reach a paired state.

A and B set up two neural networks of the same size with different random weight values satisfying -L ≤ w_{i,j} ≤ L, where 2L+1 is the number of possible values each weight can take.
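The initialization step can be sketched as follows; this is a minimal illustration, assuming K hidden units with N inputs each and a weight bound L (the sizes and variable names are illustrative, not taken from the Demonstration):

```python
import numpy as np

K, N, L = 3, 4, 3  # hidden units, inputs per unit, weight bound (illustrative values)

rng = np.random.default_rng(0)

# Each party independently draws its own random weight matrix
# with integer entries in {-L, ..., L} (2L+1 possible values per weight).
w_A = rng.integers(-L, L + 1, size=(K, N))
w_B = rng.integers(-L, L + 1, size=(K, N))
```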

The algorithm is:

1. The input of the network is randomized with values x_{i,j} ∈ {-1, +1}.

2. Compute the value of the hidden neurons according to σ_i = sgn(Σ_j w_{i,j} x_{i,j}).

3. Compute the value of the output neuron τ = Π_i σ_i.

4. Compare the outputs of both networks. If the outputs do not match, return to step 1. If they do match, update the weights of both networks according to one of the following rules:

The Hebbian Learning Rule (used here): w_{i,j}^+ = w_{i,j} + σ_i x_{i,j} θ(σ_i τ) θ(τ_A τ_B), where θ is the Heaviside step function; the updated weights are clipped back into the range [-L, L].

The Anti-Hebbian Learning Rule: w_{i,j}^+ = w_{i,j} - σ_i x_{i,j} θ(σ_i τ) θ(τ_A τ_B).

Random walk: w_{i,j}^+ = w_{i,j} + x_{i,j} θ(σ_i τ) θ(τ_A τ_B).

5. Repeat the process until the weights of both neural networks are equal. The shared key is the common value of the weights of the networks.
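The full synchronization loop above can be sketched as follows; this is a minimal simulation using the Hebbian rule, with illustrative network sizes and helper names (a real deployment would run A and B on separate machines exchanging only the public inputs and outputs):

```python
import numpy as np

def tpm_output(w, x):
    # Steps 2-3: hidden-unit signs and the parity output tau.
    # sign(0) is mapped to +1 so sigma is always in {-1, +1}.
    sigma = np.sign(np.sum(w * x, axis=1))
    sigma[sigma == 0] = 1
    return sigma, int(np.prod(sigma))

def hebbian_update(w, x, sigma, tau, L):
    # Hebbian rule: only hidden units that agree with the output
    # (the theta(sigma_i * tau) factor) are updated; weights are
    # then clipped back into {-L, ..., L}.
    for i in range(len(w)):
        if sigma[i] == tau:
            w[i] += sigma[i] * x[i]
    np.clip(w, -L, L, out=w)

K, N, L = 3, 4, 3  # illustrative sizes
rng = np.random.default_rng(1)
w_A = rng.integers(-L, L + 1, size=(K, N))
w_B = rng.integers(-L, L + 1, size=(K, N))

for epoch in range(10_000):
    if np.array_equal(w_A, w_B):
        break                                  # step 5: paired, weights are the key
    x = rng.choice([-1, 1], size=(K, N))       # step 1: same public random input
    sigma_A, tau_A = tpm_output(w_A, x)        # steps 2-3 for A
    sigma_B, tau_B = tpm_output(w_B, x)        # steps 2-3 for B
    if tau_A == tau_B:                         # step 4: update only on agreement
        hebbian_update(w_A, x, sigma_A, tau_A, L)
        hebbian_update(w_B, x, sigma_B, tau_B, L)
```

For a network this small the loop typically pairs well within the iteration budget; larger networks (and larger L) take longer and make the exchanged key harder to attack.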