Classical learning techniques such as the back-propagation algorithm minimize the expectation of the squared error between the actual output and the desired output of a supervised neural network. A network trained with this criterion, however, does not behave as desired when it is embedded in a system that deals with non-Gaussian signals. Since least absolute error estimation is known to be robust against noisy signals and certain types of non-Gaussian signals, a network trained with this criterion should be less sensitive to the signal type. This paper discusses the least absolute error criterion for error minimization in supervised neural networks, paying particular attention to its efficiency for the linear prediction of speech. Conventional approaches to this estimation have carried a much heavier computational load than the usual least squares estimator, but the proposed approach can significantly improve the analysis performance because it is based on a simple gradient descent algorithm.
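The abstract does not give the update rule itself, but a gradient descent on the least absolute error criterion for linear prediction can be sketched as follows. This is a minimal illustration, not the paper's method: the function name `lpc_lad_gd` and all parameter values are hypothetical, and the subgradient of |e| is taken as sign(e).

```python
def lpc_lad_gd(signal, order=2, lr=0.01, epochs=300):
    """Fit linear-prediction coefficients a_k by minimizing the
    least absolute error |e_t| = |x_t - sum_k a_k * x_{t-1-k}|
    with plain (sub)gradient descent.  Hypothetical sketch, not
    the paper's algorithm: d|e|/da_k = -sign(e) * x_{t-1-k}."""
    a = [0.0] * order
    for _ in range(epochs):
        for t in range(order, len(signal)):
            pred = sum(a[k] * signal[t - 1 - k] for k in range(order))
            e = signal[t] - pred
            s = (e > 0) - (e < 0)  # subgradient sign(e) in {-1, 0, 1}
            for k in range(order):
                a[k] += lr * s * signal[t - 1 - k]
    return a

# Demo: a signal that satisfies x_t = 0.9 * x_{t-1} exactly,
# so the first-order predictor coefficient should settle near 0.9.
signal = [0.9 ** t for t in range(60)]
coeffs = lpc_lad_gd(signal, order=1)
```

Because the update uses only the sign of the residual, each step costs no more than the corresponding least-squares update, which is the basis of the abstract's claim that the computational load stays low.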
|Number of pages||4|
|Publication status||Published - 1 Dec 1996|
|Event||Proceedings of the 1996 International Conference on Spoken Language Processing, ICSLP. Part 1 (of 4) - Philadelphia, PA, USA|
Duration: 3 Oct 1996 → 6 Oct 1996