
Neural Network: Why can't we calculate derivatives during forward prop itself?


Problem

I have been studying Andrew Ng's Coursera course on Deep Learning. In it, he mentions that we calculate the activations during the forward pass, and the derivatives $\dfrac{dL}{dz}$, $\dfrac{dL}{dw}$, and $\dfrac{dL}{db}$ during the backward pass.

Consider a single neural node

Here,
$x_{1}, x_{2}, \dots, x_{n}$ are the inputs to the node

$b$ is the bias

$w_{1}, w_{2}, \dots, w_{n}$ are the weights associated with the inputs

$\hat{y}$ or $a$ is the node's output (activation), where $a = \sigma(z) = \dfrac{1}{1 + e^{-z}}$

$z = w_{1}x_{1} + w_{2}x_{2} + \dots + w_{n}x_{n} + b$

The term $L(a, y) = -\{\, y\log a + (1 - y)\log(1 - a) \,\}$ is our loss function
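
For concreteness, here is a minimal NumPy sketch of this forward pass (the helper name `forward` and the example values are my own, chosen only for illustration):

```python
import numpy as np

def forward(x, w, b, y):
    """Forward pass for a single sigmoid node with cross-entropy loss."""
    z = np.dot(w, x) + b                               # z = w1*x1 + ... + wn*xn + b
    a = 1.0 / (1.0 + np.exp(-z))                       # a = sigma(z)
    L = -(y * np.log(a) + (1 - y) * np.log(1 - a))     # L(a, y)
    return z, a, L

# toy values, chosen arbitrarily for illustration
x = np.array([0.5, -1.2, 2.0])
w = np.array([0.1, 0.4, -0.3])
b, y = 0.2, 1.0
z, a, L = forward(x, w, b, y)
```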

The way Andrew calculates the derivatives is as follows:

$\begin{gathered}
\dfrac{dL}{dw} = \dfrac{dL}{dz} \cdot \dfrac{dz}{dw}\\
\dfrac{dL}{dz} = \dfrac{dL}{da} \cdot \dfrac{da}{dz}
\end{gathered}$

We back-propagate to calculate:

$\dfrac{dL}{da}$ (1 step back)

$\dfrac{da}{dz}$ and $\dfrac{dL}{dz} = \dfrac{dL}{da} \cdot \dfrac{da}{dz}$ (2 steps back)

$\dfrac{dz}{dw}$ and $\dfrac{dL}{dw} = \dfrac{dL}{dz} \cdot \dfrac{dz}{dw}$ (3 steps back)
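
In code, those three backward steps look roughly like this (a sketch continuing the one above, not Andrew's implementation; the analytical forms of $\dfrac{dL}{da}$ and $\dfrac{da}{dz}$ follow from the cross-entropy and sigmoid definitions):

```python
def backward(x, a, y):
    """The three back-prop steps for the single node above."""
    dL_da = -(y / a) + (1 - y) / (1 - a)   # 1 step back: dL/da
    da_dz = a * (1 - a)                    # 2 steps back: sigmoid derivative...
    dL_dz = dL_da * da_dz                  # ...and dL/dz = dL/da * da/dz (simplifies to a - y)
    dz_dw = x                              # 3 steps back: dz/dw...
    dL_dw = dL_dz * dz_dw                  # ...and dL/dw = dL/dz * dz/dw
    dL_db = dL_dz                          # dz/db = 1, so dL/db = dL/dz
    return dL_dw, dL_db

# dL_dw, dL_db = backward(x, a, y)   # using x, a, y from the forward sketch above
```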

However, my question is: why can't we calculate these derivatives during the forward pass itself? Like so,

$\dfrac{dL}{dw} = \dfrac{dL}{da} \cdot \dfrac{da}{dz} \cdot \dfrac{dz}{dw}$

Step 1: Calculate $z$ as well as $\dfrac{dz}{dw}$

Step 2: Calculate $a$ as well as $\dfrac{da}{dz}$ and $\dfrac{da}{dw} = \dfrac{da}{dz} \cdot \dfrac{dz}{dw}$

Step 3: Calculate $L$ as well as $\dfrac{dL}{da}$ and $\dfrac{dL}{dw} = \dfrac{dL}{da} \cdot \dfrac{da}{dw}$
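
And here is the same node written the way these three steps propose, accumulating derivatives on the way forward (again only a sketch with my own names and values). For this isolated, single-output node it does produce the same $\dfrac{dL}{dw}$:

```python
import numpy as np

def forward_with_derivatives(x, w, b, y):
    """The proposed forward-only scheme: carry derivatives along with the values."""
    # Step 1: z together with dz/dw
    z = np.dot(w, x) + b
    dz_dw = x
    # Step 2: a together with da/dz and the accumulated da/dw
    a = 1.0 / (1.0 + np.exp(-z))
    da_dz = a * (1 - a)
    da_dw = da_dz * dz_dw
    # Step 3: L together with dL/da and the final dL/dw
    L = -(y * np.log(a) + (1 - y) * np.log(1 - a))
    dL_da = -(y / a) + (1 - y) / (1 - a)
    dL_dw = dL_da * da_dw
    return L, dL_dw
```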

Is there any reason why we take the back-prop approach, and not calculate both sets of functions together during forward prop?

EDIT: For a network with $n$ layers, we can calculate the weight derivative for a layer $j$, as:

$\dfrac{dL}{dw^{[j]}} = \dfrac{dL}{da^{[n]}} \cdot \dfrac{da^{[n]}}{dz^{[n]}} \cdot \dfrac{dz^{[n]}}{da^{[n-1]}} \cdot \dfrac{da^{[n-1]}}{dz^{[n-1]}} \cdots \dfrac{da^{[j]}}{dz^{[j]}} \cdot \dfrac{dz^{[j]}}{dw^{[j]}}$
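
As a toy illustration of that layered chain (my own one-neuron-per-layer construction, not something from the course): the product is accumulated starting at the output layer and walking back to layer $j$, which is exactly the reverse-order traversal.

```python
import numpy as np

def layered_backprop(x, ws, bs, y):
    """Chain of one-neuron sigmoid layers; the reverse pass yields dL/dw[j] for every layer j."""
    # forward pass: cache every activation a[0] = x, a[1], ..., a[n]
    acts = [x]
    for w, b in zip(ws, bs):
        acts.append(1.0 / (1.0 + np.exp(-(w * acts[-1] + b))))
    # backward pass: delta = dL/dz at the current layer, pushed back one layer at a time
    delta = acts[-1] - y                 # with sigmoid + cross-entropy at the output, dL/dz = a - y
    grads = [0.0] * len(ws)
    for j in reversed(range(len(ws))):
        grads[j] = delta * acts[j]       # dL/dw for layer j: delta times that layer's input activation
        if j > 0:
            delta *= ws[j] * acts[j] * (1 - acts[j])   # push delta back through z[j] and the previous sigmoid
    return grads

# grads = layered_backprop(x=0.7, ws=[0.5, -1.1, 0.8], bs=[0.1, 0.0, -0.2], y=1.0)
```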

Solution

You described the simplest case of a neural network, where the central neuron has only one output $a$, which is connected directly to the final loss function. In general, a node can have several outgoing connections, and the total error signal arriving at that node is the sum over all of its output connections. Moreover, the error signals along those connections differ, and the subsequent nodes can themselves have multiple outputs. In general, it is impossible to tell what that sum will be until the forward pass reaches the loss function.

Here's a picture for two or more output connections:

... or to make it even more complicated...

That's why you can't just multiply $\frac{da}{dz} \cdot \frac{dz}{dw}$ during the forward pass: the total error signal $\delta$ arriving at the node depends on everything downstream of it and cannot be computed locally. What is known for sure is the local gradient $\frac{dz}{dw}$, but in order to get $\delta$, all of the later nodes must be processed in reverse order.
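
A tiny numeric sketch of that point (my own toy example, not from the CS224n slides): the node's output $a$ feeds two downstream branches, so the $\delta$ that multiplies the local gradients is a sum that only becomes available once everything after the node has been evaluated.

```python
import numpy as np

# Toy example: a = sigma(w*x + b) feeds TWO downstream branches,
# and the loss is L = f(a) + g(a) for two simple functions f and g.
x, w, b = 1.5, 0.8, -0.2
z = w * x + b
a = 1.0 / (1.0 + np.exp(-z))

# Local pieces -- these ARE available during the forward pass through this node:
da_dz = a * (1 - a)
dz_dw = x

# Downstream: say f(a) = 3*a**2 and g(a) = sin(a), so dL/da = 6*a + cos(a).
# This sum over the node's output connections is only known after the rest of
# the forward pass (and the backward pass over the later nodes) has run.
delta = 6 * a + np.cos(a)

dL_dw = delta * da_dz * dz_dw    # the local product is of no use until delta arrives
```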

Credit: the pictures are from Stanford's CS224n class.

Context

StackExchange Computer Science Q#85791, answer score: 6
