Analog Neural Networks of Limited Precision I

Part of Advances in Neural Information Processing Systems 2 (NIPS 1989)

Zoran Obradovic, Ian Parberry
Experimental evidence has shown analog neural networks to be extremely fault-tolerant; in particular, their performance does not appear to be significantly impaired when precision is limited. Analog neurons with limited precision essentially compute k-ary weighted multilinear threshold functions, which divide R^n into k regions with k-1 hyperplanes. The behaviour of k-ary neural networks is investigated. There is no canonical set of threshold values for k > 3, although they exist for binary and ternary neural networks. The weights can be made integers of only O((z+k) log(z+k)) bits, where z is the number of processors, without increasing hardware or running time. The weights can be made ±1 while increasing running time by a constant multiple and hardware by a small polynomial in z and k. Binary neurons can be used if the running time is allowed to increase by a larger constant multiple and the hardware is allowed to increase by a slightly larger polynomial in z and k. Any symmetric k-ary function can be computed in constant depth and size O(n^(k-1)/(k-2)!), and any k-ary function can be computed in constant depth and size O(nk^n). The alternating neural networks of Olafsson and Abu-Mostafa, and the quantized neural networks of Fleisher are closely related to this model.
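To make the model concrete, the following is a minimal Python sketch (not from the paper) of a k-ary weighted threshold neuron: n real inputs are combined with weights, and k-1 ordered thresholds split the weighted sum into k output levels, corresponding to the k regions cut out of R^n by the k-1 hyperplanes. The function name, weights, and threshold values are illustrative assumptions only.

    def k_ary_threshold(inputs, weights, thresholds):
        """Return the region index (0 .. k-1) that the weighted sum falls into.

        `thresholds` must contain k-1 values in increasing order; each
        hyperplane w.x = t_i separates two adjacent output levels.
        """
        s = sum(w * x for w, x in zip(weights, inputs))
        level = 0
        for t in thresholds:
            if s >= t:
                level += 1
            else:
                break
        return level

    # Example: a ternary (k = 3) neuron with two thresholds (hypothetical values).
    weights = [0.5, -1.0, 2.0]
    thresholds = [-0.5, 1.5]
    print(k_ary_threshold([1.0, 0.0, 1.0], weights, thresholds))  # weighted sum 2.5 -> level 2

With k = 2 and a single threshold this reduces to the usual binary linear threshold neuron, which is the sense in which limited-precision analog neurons generalize the binary model.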