Paper
Jeffreys' prior for layered neural networks
6 April 1995
Yoichi Motomura
Abstract
In this paper, Jeffreys' prior for neural networks is discussed in the framework of Bayesian statistics. To achieve good generalization performance, regularization methods that minimize the sum of a cost function and a regularization term are commonly used. In Bayesian statistics, the regularization term can be derived naturally from the prior distribution of the parameters. Jeffreys' prior is a well-known non-informative objective prior. In the case of neural networks, however, it is not easy to express Jeffreys' prior as a simple function of the parameters. In this paper, a numerical analysis of Jeffreys' prior for neural networks is given. An approximation of Jeffreys' prior is obtained from a parameter transformation that makes Jeffreys' prior a simple function of the parameters. Some learning techniques are also discussed as applications of these results.
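As a concrete illustration (not from the paper itself): Jeffreys' prior is proportional to the square root of the determinant of the Fisher information matrix, pi(theta) ∝ sqrt(det I(theta)). The Python sketch below estimates this quantity numerically for a hypothetical one-hidden-unit regression network under Gaussian noise; the function net, the input sample xs, and the noise variance sigma2 are illustrative assumptions, not the author's construction.

import numpy as np

def net(x, theta):
    # Hypothetical single-hidden-unit network: f(x) = v * tanh(w*x + b)
    w, b, v = theta
    return v * np.tanh(w * x + b)

def fisher_info(theta, xs, eps=1e-5, sigma2=1.0):
    # For Gaussian regression noise, I(theta) = E_x[grad f grad f^T] / sigma^2;
    # gradients are estimated by central finite differences.
    d = len(theta)
    info = np.zeros((d, d))
    for x in xs:
        g = np.zeros(d)
        for i in range(d):
            tp = np.array(theta, dtype=float)
            tm = np.array(theta, dtype=float)
            tp[i] += eps
            tm[i] -= eps
            g[i] = (net(x, tp) - net(x, tm)) / (2 * eps)
        info += np.outer(g, g) / sigma2
    return info / len(xs)

def jeffreys_unnormalized(theta, xs):
    # Jeffreys' prior up to a normalizing constant: sqrt(det I(theta))
    return np.sqrt(max(np.linalg.det(fisher_info(theta, xs)), 0.0))

xs = np.linspace(-3.0, 3.0, 200)   # assumed input distribution (uniform grid)
print(jeffreys_unnormalized([1.0, 0.0, 1.0], xs))

Evaluating this density over a grid of parameter values makes visible how irregular Jeffreys' prior is for a network, which is the difficulty the abstract refers to.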
© (1995) COPYRIGHT Society of Photo-Optical Instrumentation Engineers (SPIE). Downloading of the abstract is permitted for personal use only.
Yoichi Motomura "Jeffreys' prior for layered neural networks", Proc. SPIE 2492, Applications and Science of Artificial Neural Networks, (6 April 1995); https://doi.org/10.1117/12.205194
KEYWORDS
Neural networks, Numerical analysis, Data modeling, Information science, Bayesian inference, Error analysis, Statistical analysis