Part of Advances in Neural Information Processing Systems 7 (NIPS 1994)
Fu-Sheng Tsung, Garrison Cottrell
Existing recurrent net learning algorithms are inadequate. We introduce the conceptual framework of viewing recurrent training as matching vector fields of dynamical systems in phase space. Phase-space reconstruction techniques make the hidden states explicit, reducing temporal learning to a feed-forward problem. In short, we propose viewing iterated prediction [LF88] as the best way of training recurrent networks on deterministic signals. Using this framework, we can train multiple trajectories, ensure their stability, and design arbitrary dynamical systems.
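As a rough illustration of the idea (not the paper's own code), delay-coordinate reconstruction turns a scalar time series into explicit state vectors, so one-step prediction becomes a feed-forward supervised problem whose iterated predictions trace out the trajectory. The function name `delay_embed` and the parameter choices below are illustrative assumptions:

```python
import numpy as np

def delay_embed(x, dim, tau):
    """Reconstruct phase-space state vectors from a scalar series x
    using delay coordinates (an assumed Takens-style embedding)."""
    n = len(x) - (dim - 1) * tau
    return np.array([x[i : i + dim * tau : tau] for i in range(n)])

# A noiseless sine wave stands in for a deterministic signal.
x = np.sin(np.linspace(0, 8 * np.pi, 400))
states = delay_embed(x, dim=3, tau=5)      # explicit hidden states
inputs, targets = states[:-1], states[1:]  # one-step prediction pairs

# A feed-forward net fit to (inputs, targets) approximates the vector
# field; iterating its own predictions reproduces the trajectory.
```

The embedding makes the otherwise-hidden state visible, which is what reduces the temporal learning task to ordinary input-output pairs.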