Computing with Infinite Networks

Christopher K. I. Williams
Neural Computing Research Group, Department of Computer Science and Applied Mathematics, Aston University, Birmingham, UK

In: Mozer, M. C. and Jordan, M. I. (eds.), Advances in Neural Information Processing Systems 9 (Proceedings of the 1996 Conference). MIT Press, 1997.
10th Annual Conference on Neural Information Processing Systems, NIPS 1996, 2-5 December 1996.
https://dl.acm.org/doi/10.5555/2998981.2999023
Copyright of the Massachusetts Institute of Technology Press (MIT Press).

Abstract: For neural networks with a wide class of weight-priors, it can be shown that in the limit of an infinite number of hidden units the prior over functions tends to a Gaussian process. In this paper analytic forms are derived for the covariance function of the Gaussian processes corresponding to networks with sigmoidal and Gaussian hidden units. This allows predictions to be made efficiently using networks with an infinite number of hidden units, and shows that, somewhat paradoxically, it may be easier to compute with infinite networks than finite ones.
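
As a pointer to what these analytic forms look like (a sketch in my own notation, following Neal's infinite-width construction rather than the paper's exact presentation): for a network f(x) = b + \sum_{j=1}^{H} v_j h(x; u_j) with b \sim N(0, \sigma_b^2), independent v_j \sim N(0, \sigma_v^2 / H), and i.i.d. hidden-unit parameters u_j, the central limit theorem gives, as H \to \infty, a Gaussian process prior with covariance

    C(x, x') = \sigma_b^2 + \sigma_v^2 \, E_u[\, h(x; u) \, h(x'; u) \,].

For sigmoidal hidden units h(x; u) = \mathrm{erf}(u_0 + \sum_i u_i x_i) with u \sim N(0, \Sigma), the expectation has the closed "arcsine" form

    V_{\mathrm{erf}}(x, x') = \frac{2}{\pi} \sin^{-1} \frac{2\,\tilde{x}^\top \Sigma\, \tilde{x}'}{\sqrt{(1 + 2\,\tilde{x}^\top \Sigma\, \tilde{x})(1 + 2\,\tilde{x}'^\top \Sigma\, \tilde{x}')}},

where \tilde{x} = (1, x_1, \dots, x_d)^\top is the input augmented with a bias entry; Gaussian hidden units similarly yield a closed-form, non-stationary Gaussian-shaped covariance.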

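To make the "predictions with an infinite network" point concrete, here is a minimal GP-regression sketch in NumPy using the arcsine covariance above (an illustration under my assumptions: the diagonal weight-prior, hyperparameter values, and toy data are mine, not the paper's):

import numpy as np

def erf_kernel(X1, X2, sigma2_bias=1.0, sigma2_w=10.0):
    # Covariance of an infinite network of erf hidden units, assuming a
    # diagonal input-weight prior Sigma = diag(sigma2_bias, sigma2_w, ...).
    X1a = np.hstack([np.ones((X1.shape[0], 1)), X1])  # augmented inputs x~
    X2a = np.hstack([np.ones((X2.shape[0], 1)), X2])
    Sigma = np.diag([sigma2_bias] + [sigma2_w] * X1.shape[1])
    s11 = np.sum((X1a @ Sigma) * X1a, axis=1)[:, None]  # x~^T Sigma x~
    s22 = np.sum((X2a @ Sigma) * X2a, axis=1)[None, :]
    s12 = X1a @ Sigma @ X2a.T
    return (2.0 / np.pi) * np.arcsin(
        2.0 * s12 / np.sqrt((1.0 + 2.0 * s11) * (1.0 + 2.0 * s22)))

def gp_predict(X, y, Xs, noise_var=0.01):
    # Standard GP posterior: mean and pointwise variance at test inputs Xs.
    K = erf_kernel(X, X) + noise_var * np.eye(len(X))
    Ks = erf_kernel(Xs, X)
    mean = Ks @ np.linalg.solve(K, y)
    cov = erf_kernel(Xs, Xs) - Ks @ np.linalg.solve(K, Ks.T)
    return mean, np.diag(cov)

# Toy 1-d regression example.
rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(20, 1))
y = np.tanh(2.0 * X[:, 0]) + 0.1 * rng.standard_normal(20)
Xs = np.linspace(-3, 3, 101)[:, None]
mu, var = gp_predict(X, y, Xs)

Note that the cost sits in the n-by-n solve over the 20 training points and is independent of the (infinite) number of hidden units, which is the "somewhat paradoxical" efficiency the abstract refers to.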