How are two layer feed-forward neural networks universal?
Problem
Across my studies I have noticed the following statement in my Subject Guide; namely, that two-layer feed-forward neural networks using the sigmoidal activation function are universal. My question is how are the networks 'universal' and what does 'universal' actually mean in this instance?
Thanks in advance.
Solution
I think they are referring to the universal approximation theorem (Cybenko, 1989). Informally, it states that for any continuous function $f$ on a compact subset $K \subseteq \mathbb{R}^n$ and any $\varepsilon > 0$, there exists a feed-forward network with a single hidden layer of sigmoidal units whose output $F$ satisfies $|F(\vec{x}) - f(\vec{x})| < \varepsilon$ for all $\vec{x} \in K$. That is, such networks can approximate any continuous function arbitrarily closely, which is the sense in which they are 'universal'. Note that the theorem only guarantees that suitable weights exist; it says nothing about whether a training algorithm will find them, and the number of hidden units needed can grow with the desired accuracy.
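One way to see the intuition behind the theorem: a pair of steep sigmoids forms a "bump", and a weighted sum of such bumps gives a piecewise-constant-like approximation of any continuous function. The sketch below (my own illustrative construction, not code from the theorem's proof) approximates $\sin(x)$ on $[0, \pi]$ with a single hidden layer of sigmoid units; the bump count, steepness, and target function are arbitrary choices for the demo.

```python
import math

def sigmoid(z):
    """Numerically stable logistic function (avoids overflow for large |z|)."""
    if z >= 0:
        return 1.0 / (1.0 + math.exp(-z))
    ez = math.exp(z)
    return ez / (1.0 + ez)

def net(x, centers, width, k, weights):
    """Single-hidden-layer network: each 'bump' is the difference of two
    steep sigmoids (two hidden units), scaled by a fixed output weight."""
    return sum(
        w * (sigmoid(k * (x - (c - width / 2))) - sigmoid(k * (x - (c + width / 2))))
        for w, c in zip(weights, centers)
    )

f = math.sin               # target function to approximate
a, b = 0.0, math.pi        # compact interval [0, pi]
n = 50                     # number of bumps (2n hidden units)
h = (b - a) / n            # bump width
k = 200.0                  # sigmoid steepness: larger k -> sharper bumps

centers = [a + (i + 0.5) * h for i in range(n)]
weights = [f(c) for c in centers]  # weight each bump by f at its centre

# Maximum error over a fine grid: shrinks as n grows (more, narrower bumps).
grid = [a + j * (b - a) / 500 for j in range(501)]
err = max(abs(net(x, centers, h, k, weights) - f(x)) for x in grid)
```

With 50 bumps the maximum error is already well under 0.1; increasing `n` (and `k` proportionally) drives the error toward zero, mirroring the "arbitrarily closely" claim in the theorem.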
Context
StackExchange Computer Science Q#11586, answer score: 4