Neural networks learning with L1 criteria and its efficiency in linear prediction of speech signals

Munehiro Namba, Hiroyuki Kamata, Yoshihisa Ishida

Research output: Contribution to conference › Paper

2 Citations (Scopus)

Abstract

Classical learning techniques such as the back-propagation algorithm minimize the expectation of the squared error that arises between the actual output and the desired output of a supervised neural network. A network trained by such a technique, however, does not behave in the desired way when it is embedded in a system that deals with non-Gaussian signals. Since least absolute estimation is known to be robust to noisy signals and certain types of non-Gaussian signals, a network trained with this criterion may be less sensitive to the signal type. This paper discusses the least absolute error criterion for error minimization in supervised neural networks, paying particular attention to its efficiency for the linear prediction of speech. Conventional approaches to this estimation incur a much heavier computational load than the usual least squares estimator, but the proposed approach can significantly improve the analysis performance, since the method is based on a simple gradient descent algorithm.
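The gradient descent idea the abstract describes can be illustrated on the simplest case, a linear predictor (a neural network with no hidden layer). The following sketch is not the authors' code; it is a minimal, hypothetical example assuming a synthetic AR(2) signal with heavy-tailed excitation. The key point is that the subgradient of the L1 cost, sum|e|, is simply sign(e), so each update step is as cheap as a least-squares gradient step.

```python
import numpy as np

# Hypothetical sketch (not the paper's implementation): fitting a p-th order
# linear predictor by gradient descent on the least-absolute-error criterion.
# The subgradient of |e| is sign(e), so the update is a simple sign-driven step.

rng = np.random.default_rng(0)

# Synthetic AR(2) signal with Laplacian (heavy-tailed, non-Gaussian) excitation
n = 2000
x = np.zeros(n)
for t in range(2, n):
    x[t] = 1.3 * x[t - 1] - 0.6 * x[t - 2] + rng.laplace(scale=0.1)

p = 2            # predictor order
a = np.zeros(p)  # predictor coefficients
lr = 0.05        # step size (chosen ad hoc for this example)

# Lagged design matrix: row t holds [x[t-1], ..., x[t-p]]
X = np.column_stack([x[p - k - 1:n - k - 1] for k in range(p)])
y = x[p:]

for _ in range(2000):
    e = y - X @ a                         # prediction error
    a += lr * X.T @ np.sign(e) / len(y)   # subgradient step on sum |e|

print(a)  # should approach the true AR coefficients [1.3, -0.6]
```

Replacing `np.sign(e)` with `e` in the update recovers the ordinary least-squares (L2) gradient step, which makes the two criteria directly comparable at essentially identical per-iteration cost.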

Original language: English
Pages: 1245-1248
Number of pages: 4
Publication status: Published - 1 Dec 1996
Event: Proceedings of the 1996 International Conference on Spoken Language Processing, ICSLP. Part 1 (of 4) - Philadelphia, PA, USA
Duration: 3 Oct 1996 - 6 Oct 1996

Conference

Conference: Proceedings of the 1996 International Conference on Spoken Language Processing, ICSLP. Part 1 (of 4)
City: Philadelphia, PA, USA
Period: 3/10/96 - 6/10/96


Cite this

Namba, M., Kamata, H., & Ishida, Y. (1996). Neural networks learning with L1 criteria and its efficiency in linear prediction of speech signals. 1245-1248. Paper presented at Proceedings of the 1996 International Conference on Spoken Language Processing, ICSLP. Part 1 (of 4), Philadelphia, PA, USA.