Assignment 4. Many biological systems and important engineering systems cannot be expressed as feed-forward networks or as simple Elman or Jordan nets. The Tlearn simulator is not designed for these "structured" connectionist nets, but it will allow us to experiment with some simple cases. For this exercise, all of the weights are fixed at the time of specification; you will be designing networks with the required properties rather than training them. Tlearn's method for specifying fixed links is a bit clumsy but adequate, and it is described in the manual. It turns out that you will need one trivial training run to get Tlearn to initialize your weights properly; after that you can test the network's behavior.
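
For reference, a fixed link in a Tlearn .cf file is specified by giving the connection equal lower and upper weight bounds followed by the keyword fixed. The fragment below is a minimal sketch reconstructed from the manual's conventions, not a file for this assignment; the node counts are arbitrary, and you should verify the exact syntax against the manual (including whether linear nodes are declared with the linear option in SPECIAL):

    NODES:
    nodes = 2
    inputs = 1
    outputs = 1
    output node is 2

    CONNECTIONS:
    groups = 0
    1 from i1 = 1.0 & 1.0 fixed
    2 from 1 = -1.5 & -1.5 fixed

    SPECIAL:
    linear = 1-2
    selected = 1-2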

a) The first task is to build a simple winner-take-all (WTA) network using linear Tlearn nodes. Your net should have input nodes i1 and i2, hidden nodes 1 and 2, and output nodes 3 and 4 (O3 and O4). The goal of the network is to amplify differences between i1 and i2 so that if i1 > i2 then, after some cycles, O3 >> O4, which is the WTA behavior. The basic idea is mutual inhibition: negatively weighted links from 1 to 2 and from 2 to 1. By symmetry, these two weights should have the same magnitude. It turns out that Tlearn preserves unit activations over test runs, so you can test your network by running it on ~10 examples, all with the same input values. If your design is right, this should be enough to amplify even small differences between i1 and i2. Tlearn evaluates activations in node-number order, as we learned in the last problem. How does this bias your WTA network? Of course, biological neurons fire in parallel, and there are simulators that work hard to capture those effects, but they are more complex.
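
To see why mutual inhibition between linear units amplifies differences, here is a minimal simulation in plain Python rather than Tlearn. The inhibition magnitude of 1.5 and the fixed unit-weight copies from hidden nodes 1 and 2 to O3 and O4 are illustrative assumptions; the sequential update mimics Tlearn's node-number evaluation order, and the activations persist across sweeps just as they do across Tlearn test runs:

    # Sketch of the part (a) dynamics in plain Python (not Tlearn).
    # Assumed values: mutual-inhibition weight -1.5, and O3/O4 copying
    # hidden nodes 1/2 through fixed weights of 1.0.

    W = 1.5  # magnitude of the mutual-inhibition weights (assumption)

    def run_wta(i1, i2, sweeps=10):
        a1 = a2 = 0.0                # activations persist across sweeps
        for t in range(sweeps):
            a1 = i1 - W * a2         # node 1 is evaluated first
            a2 = i2 - W * a1         # node 2 already sees the new a1
            print(f"sweep {t + 1:2d}: O3 = {a1:+9.3f}  O4 = {a2:+9.3f}")

    run_wta(0.6, 0.5)  # small difference: O3 >> O4 within a few sweeps
    run_wta(0.5, 0.5)  # equal inputs: node 1 still wins (the ordering bias)

Because linear units have no saturation, any inhibition magnitude greater than 1 makes the activation gap grow geometrically from sweep to sweep, while a magnitude less than 1 merely settles to a fixed point.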

b) The other part of this assignment is to hand-build a network that captures the Necker cube behavior from the first lecture. Recall that the Necker figure is a wire-frame cube with 8 vertices that can be perceived in either of two stable orientations. Your net should have two output units corresponding to the two stable states. For each vertex, you should have a WTA pair of units, one for each interpretation. There should be positive connections among the units that are part of the same view. You should be able to get by with just two input units to bias the network. Again, because activations are preserved over test runs, a small number of test cases should suffice. It is worth some thought to choose a numbering scheme that fits best with Tlearn's layout and evaluation order.
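
For a rough picture of the dynamics part (b) should produce, here is an analogous plain-Python sketch. Everything numeric is an illustrative assumption: 0.1 excitation between same-view units, 0.5 inhibition inside each vertex pair, and fully connected same-view wiring; your Tlearn design may choose differently:

    # Sketch of the part (b) dynamics in plain Python (not Tlearn).
    # Eight WTA pairs, one unit per vertex per interpretation; all
    # weight values here are illustrative assumptions.

    N = 8          # cube vertices
    EXCITE = 0.1   # + weight between units belonging to the same view
    INHIBIT = 0.5  # - weight inside each vertex's WTA pair

    def run_necker(bias_a, bias_b, sweeps=5):
        A = [0.0] * N              # view-A unit at each vertex
        B = [0.0] * N              # view-B unit at each vertex
        for _ in range(sweeps):    # activations persist across sweeps
            for v in range(N):     # evaluate in node-number order
                A[v] = (bias_a
                        + EXCITE * sum(A[u] for u in range(N) if u != v)
                        - INHIBIT * B[v])
                B[v] = (bias_b
                        + EXCITE * sum(B[u] for u in range(N) if u != v)
                        - INHIBIT * A[v])
        return sum(A), sum(B)      # the two outputs: total support per view

    print(run_necker(0.1, 0.0))   # a tiny bias toward view A drives every
                                  # vertex pair to the A interpretation

Fully connected same-view wiring is only one option; connecting, say, only adjacent vertices within a view would also satisfy the requirement, with somewhat different dynamics.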

c) Why does the Necker network converge faster than the WTA network in part (a)?

This assignment is due in class on September 25, or by earlier
electronic submission.