Building Blocks of a Deep Neural Network (C1W4L05)

# Implementing Deep Neural Networks: A Step-by-Step Guide

In the earlier videos from this week, as well as in the videos from the past several weeks, you've already seen the basic building blocks of forward propagation and backpropagation: the key components you need to implement a deep neural network. Now, let's explore how you can put these components together to build your deep net.

## Understanding Layer Computations

Let's start by focusing on one layer at a time. For a layer **l**, you have parameters **W[l]** (a weight matrix) and **b[l]** (a bias vector). During the forward propagation step, you take the activations **a[l-1]** from the previous layer as input and output **a[l]**. The computation is straightforward:

1. Compute **Z[l] = W[l] × a[l-1] + b[l]**, where **×** denotes matrix multiplication.

2. Apply the layer's activation function **g** to obtain **a[l] = g(Z[l])**.

This process shows how you transition from the input activations **a[l-1]** to the output activations **a[l]**. It turns out that storing the value of **Z[l]** (the pre-activation) is also useful for later computations during backpropagation, so we'll cache it as part of the forward step.
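
To make this concrete, here is a minimal NumPy sketch of the forward step for a single layer. The function and variable names (`linear_activation_forward`, `A_prev`) and the choice of ReLU as the activation are illustrative assumptions, not the exact code from the programming exercise:

```python
import numpy as np

def relu(Z):
    # Rectified linear unit, used here as a stand-in for the layer's activation g.
    return np.maximum(0, Z)

def linear_activation_forward(A_prev, W, b, activation=relu):
    """Forward step for one layer: Z = W @ A_prev + b, then A = g(Z).

    Shapes (m examples): A_prev is (n_prev, m), W is (n, n_prev), b is (n, 1).
    Returns the layer's activations A and a cache of values needed for backprop.
    """
    Z = W @ A_prev + b            # linear part; b broadcasts across the m examples
    A = activation(Z)             # a[l] = g(Z[l])
    cache = (A_prev, W, b, Z)     # saved for the backward pass
    return A, cache
```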

## The Backward Propagation Step

For the backward propagation step, focusing on layer **l**, you need to implement a function that takes the derivative of the loss with respect to **a[l]** (denoted **da[l]**) and computes the derivative of the loss with respect to **a[l-1]** (denoted **da[l-1]**).

The input to this backward function is **da[l]** along with the cache, which contains **Z[l]**. Using these values, you can compute the gradients needed for learning. Specifically, the backward function outputs not only **da[l-1]** but also the gradients of the loss with respect to **W[l]** and **b[l]**, denoted **dW[l]** and **db[l]**, respectively.

These computations are typically represented using red arrows to denote the flow of gradients during backpropagation. If you can implement these two functions—forward and backward—you have the basic computation for a neural network layer.
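
A matching sketch of the backward step, under the same assumptions as above (ReLU activation, the cache layout from the forward sketch, and a loss averaged over the `m` examples):

```python
def relu_backward(dA, Z):
    # Derivative of ReLU: the gradient passes through only where Z > 0.
    return dA * (Z > 0)

def linear_activation_backward(dA, cache, activation_backward=relu_backward):
    """Backward step for one layer.

    Given dA = dL/da[l] and the cache from the forward pass, return
    da[l-1], dW[l], and db[l].
    """
    A_prev, W, b, Z = cache
    m = A_prev.shape[1]                          # number of examples
    dZ = activation_backward(dA, Z)              # dZ[l] = dA * g'(Z[l])
    dW = (dZ @ A_prev.T) / m                     # dW[l], averaged over the batch
    db = np.sum(dZ, axis=1, keepdims=True) / m   # db[l]
    dA_prev = W.T @ dZ                           # da[l-1], passed to the previous layer
    return dA_prev, dW, db
```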

## The Training Process

Now, let's consider the entire network. Starting with the input features **a[0]** (which is your input data **X**), you compute the activations of the first layer, **a[1]**, using **W[1]** and **b[1]**. Along the way, you cache **Z[1]** for later use in backpropagation.

This process repeats for each subsequent layer:

- Using **W[2]** and **b[2]**, compute **a[2]** from **a[1]**.

- Cache **Z[2]**.

- Continue until you reach the final layer, layer **L**, which outputs **a[L] = Ŷ** (your predicted values).

This concludes the forward propagation step.
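
Putting the per-layer forward function in a loop gives the full forward pass. This sketch assumes a dictionary `parameters` holding `'W1', 'b1', ..., 'WL', 'bL'`, ReLU hidden layers, and a sigmoid output for binary classification; it builds on `linear_activation_forward` from the earlier sketch:

```python
def sigmoid(Z):
    # Sigmoid output activation (assumes a binary-classification setup).
    return 1 / (1 + np.exp(-Z))

def model_forward(X, parameters):
    """Forward propagation through all L layers; returns Y_hat and the caches."""
    caches = []
    A = X                              # a[0] is the input data X
    L = len(parameters) // 2           # two entries (W, b) per layer
    for l in range(1, L):              # hidden layers 1..L-1 use ReLU here
        W, b = parameters[f"W{l}"], parameters[f"b{l}"]
        A, cache = linear_activation_forward(A, W, b, relu)
        caches.append(cache)
    # Final layer L produces Y_hat = a[L] with a sigmoid activation.
    WL, bL = parameters[f"W{L}"], parameters[f"b{L}"]
    Y_hat, cache = linear_activation_forward(A, WL, bL, sigmoid)
    caches.append(cache)
    return Y_hat, caches
```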

For the backward propagation step, you start with the derivative of the loss with respect to **a[L]** (that is, **da[L]**) and propagate these gradients backward through the network:

- Compute **da[L-1]** from **da[L]**.

- Continue this process until you reach **da[1]**. You could take one more step and compute **da[0]**, the derivative of the loss with respect to the input features, but that quantity isn't needed for training the weights, so you can stop at **da[1]**.

Along the way, the backward functions also compute the gradients **dW[l]** and **db[l]** for each layer. These are exactly what you need for gradient descent: each layer's parameters are updated as **W[l] := W[l] − α·dW[l]** and **b[l] := b[l] − α·db[l]**, where **α** is the learning rate.
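
Combining the per-layer backward function with the parameter update gives the second half of one training iteration. This sketch assumes a binary cross-entropy loss with the sigmoid output from the forward sketch; the helper names follow the earlier sketches:

```python
def sigmoid_backward(dA, Z):
    # Derivative of the sigmoid output activation.
    s = 1 / (1 + np.exp(-Z))
    return dA * s * (1 - s)

def model_backward(Y_hat, Y, caches):
    """Backward propagation through all L layers; returns a dict of gradients."""
    grads = {}
    L = len(caches)
    # da[L]: derivative of the binary cross-entropy loss w.r.t. the output a[L].
    dA = -(np.divide(Y, Y_hat) - np.divide(1 - Y, 1 - Y_hat))
    # Final layer uses the sigmoid derivative; hidden layers use the ReLU derivative.
    dA, grads[f"dW{L}"], grads[f"db{L}"] = linear_activation_backward(dA, caches[L - 1], sigmoid_backward)
    for l in reversed(range(1, L)):
        dA, grads[f"dW{l}"], grads[f"db{l}"] = linear_activation_backward(dA, caches[l - 1], relu_backward)
    return grads

def update_parameters(parameters, grads, learning_rate=0.01):
    # One gradient-descent step: W[l] -= alpha * dW[l], b[l] -= alpha * db[l].
    L = len(parameters) // 2
    for l in range(1, L + 1):
        parameters[f"W{l}"] -= learning_rate * grads[f"dW{l}"]
        parameters[f"b{l}"] -= learning_rate * grads[f"db{l}"]
    return parameters
```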

## Implementation Details

Conceptually, it's useful to think of the cache as storing the value of **Z[l]** (the pre-activation) for each layer. When you implement this in code, however, you'll find that the cache is also a convenient place to store the weights **W[l]** and biases **b[l]** used in each forward pass, so that these parameters are readily available for computing gradients during the backward propagation step.

In practice, you may choose to store additional information in the cache, such as the input activations **a[l-1]**, depending on your implementation. For example, storing **Z[2]**, **W[2]**, and **b[2]** in layer 2's cache lets you access them directly when computing that layer's gradients during backpropagation.
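
For instance, with the cache layout assumed in the earlier sketches, layer 2's cache can be unpacked directly when you reach it during the backward pass:

```python
# caches[1] is what layer 2 saved on its forward pass (illustrative layout).
A1, W2, b2, Z2 = caches[1]   # a[1], W[2], b[2], and Z[2], ready for the backward step
```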

## Conclusion

By implementing these forward and backward functions for each layer, you've created a basic yet powerful framework for training a deep neural network. Each layer's computations are modular, making it easy to scale the network by adding more layers as needed.

In the next video, we'll dive deeper into how to implement these building blocks in code, providing practical insights and tips for successful implementation. Stay tuned!

"WEBVTTKind: captionsLanguage: enin the earlier videos from this week as well as from the videos from the past several weeks you've already seen the basic building blocks of board propagation and back propagation the key components you need to implement a deep neural network let's see how you can put these components together to build your deep net use the network with a few layers let's pick one layer and look at the computations focusing on just that layer for now so for layer L you have some parameters WL and Bo and for the forward prop you will input the activations a L minus 1 from the previous layer and output Al so the way we did this previously was you compute Z l equals WL x al minus 1 plus BL um and then al equals G of ZL right so that's how you go from the input al minus 1 to the output al and it turns out that for later use will be useful to also cache the value ZL so let me include this on cache as well because storing the value ZL will be useful for backward for the back propagation step later and then for the backward step or 3 for the back propagation step again focusing on computation for this layer L you're going to implement a function that inputs da of L and output da L minus 1 and just the special the details the input is actually da FL as well as the cache so you have available to you the value of ZL that you compute it and in addition to outputting GL minus 1 you will output the gradients you want in order to implement gradient descent for learning okay so this is the basic structure of how you implement this forward step I'm going to call the forward function as well as this backward step using a callback wave function so just to summarize in layer L you're going to have you know the forward step or the forward property' forward function input 800 minus 1 and output Al and in order to make this computation you need to use wo and BL um and also output a cache which contains ZL and then on the backward function using the back prop step will be another function then now inputs the AFL and outputs da l minus 1 so it tells you given the derivatives respect to these activations that's da FL how what are the derivatives or how much do I wish you know al minus 1 changes to compute the derivatives respect to the activations from the previous layer within this box ready need to use WL and BL and it turns out along the way you end up computing VL and then this false this backward function can also output dwl and DB l but now sometimes using red arrows to denote the backward elevation so if you prefer we could draw these arrows in red so if you can implement these two functions then the basic computation of the neural network will be as follows you're going to take the input features a zero see that in and that will compute the activations of the first layer let's call that a 1 and to do that you needed W 1 and B 1 and then we'll also you know cache away v1 now having done that you feed that this is the second layer and then using W 2 and B 2 you're going to compute the activations our next layer a 2 and so on until eventually you end up outputting a capital L which is equal to Y hat and along the way we cashed all of these on values Z so that's the forward propagation step now for the back propagation step what we're going to do will be a backward sequence of iterations in which you're going backwards and computing gradients like so so as you're going to feed in here da L and then this box will give us da L minus 1 and so on until we get da - da 1 you could actually get one 
more output to compute da 0 but this is derivative respect your input features which is not useful at least for training the weights of these are supervised neural networks so you could just stop it there belong the way back prop also ends up outputting PWL DB l right this used times with wo and BL this would output d w3 t p3 and so on so you end up computing all the derivatives you need and so just a maybe so in the structure of this a little bit more right these boxes will use those parameters as well wo PL and it turns out that we'll see later that inside these boxes we'll end up computing disease as well so one iteration of training for a new network involves starting with a zero which is X and going through for profit as follows computing y hats and then using that to compute this and then back prop right doing that and now you have all these derivative terms and so you know W will get updated as some W minus the learning rate times DW right for each of the layers and similarly for B right now we've compute the back prop and have all these derivatives so that's one iteration of gradient descent for your neural network now before moving on just one more implementational detail conceptually will be useful to think of the cashier as storing the value of Z for the backward functions but when you implement this you see this in the programming exercise when you implement it you find that the cash may be a convenient way to get this value of the parameters at W 1 V 1 into the backward function as well so the program exercise you actually spawn the cash is Z as well as W and B all right so to store z2w to be 2 but from an implementational standpoint i just find this a convenient way to just get the parameters copied to where you need to need to use them later when you're computing back propagation so that's just an implementational detail that you see when you do the programming exercise so you've now seen one of the basic building blocks for implementing the deep neural network in each layer there's a for propagation step and there's a corresponding backward propagation step and there's a cash deposit information from one to the other in the next video we'll talk about how you can actually implement these building blocks let's go on to the next videoin the earlier videos from this week as well as from the videos from the past several weeks you've already seen the basic building blocks of board propagation and back propagation the key components you need to implement a deep neural network let's see how you can put these components together to build your deep net use the network with a few layers let's pick one layer and look at the computations focusing on just that layer for now so for layer L you have some parameters WL and Bo and for the forward prop you will input the activations a L minus 1 from the previous layer and output Al so the way we did this previously was you compute Z l equals WL x al minus 1 plus BL um and then al equals G of ZL right so that's how you go from the input al minus 1 to the output al and it turns out that for later use will be useful to also cache the value ZL so let me include this on cache as well because storing the value ZL will be useful for backward for the back propagation step later and then for the backward step or 3 for the back propagation step again focusing on computation for this layer L you're going to implement a function that inputs da of L and output da L minus 1 and just the special the details the input is actually da FL as well as the cache 
so you have available to you the value of ZL that you compute it and in addition to outputting GL minus 1 you will output the gradients you want in order to implement gradient descent for learning okay so this is the basic structure of how you implement this forward step I'm going to call the forward function as well as this backward step using a callback wave function so just to summarize in layer L you're going to have you know the forward step or the forward property' forward function input 800 minus 1 and output Al and in order to make this computation you need to use wo and BL um and also output a cache which contains ZL and then on the backward function using the back prop step will be another function then now inputs the AFL and outputs da l minus 1 so it tells you given the derivatives respect to these activations that's da FL how what are the derivatives or how much do I wish you know al minus 1 changes to compute the derivatives respect to the activations from the previous layer within this box ready need to use WL and BL and it turns out along the way you end up computing VL and then this false this backward function can also output dwl and DB l but now sometimes using red arrows to denote the backward elevation so if you prefer we could draw these arrows in red so if you can implement these two functions then the basic computation of the neural network will be as follows you're going to take the input features a zero see that in and that will compute the activations of the first layer let's call that a 1 and to do that you needed W 1 and B 1 and then we'll also you know cache away v1 now having done that you feed that this is the second layer and then using W 2 and B 2 you're going to compute the activations our next layer a 2 and so on until eventually you end up outputting a capital L which is equal to Y hat and along the way we cashed all of these on values Z so that's the forward propagation step now for the back propagation step what we're going to do will be a backward sequence of iterations in which you're going backwards and computing gradients like so so as you're going to feed in here da L and then this box will give us da L minus 1 and so on until we get da - da 1 you could actually get one more output to compute da 0 but this is derivative respect your input features which is not useful at least for training the weights of these are supervised neural networks so you could just stop it there belong the way back prop also ends up outputting PWL DB l right this used times with wo and BL this would output d w3 t p3 and so on so you end up computing all the derivatives you need and so just a maybe so in the structure of this a little bit more right these boxes will use those parameters as well wo PL and it turns out that we'll see later that inside these boxes we'll end up computing disease as well so one iteration of training for a new network involves starting with a zero which is X and going through for profit as follows computing y hats and then using that to compute this and then back prop right doing that and now you have all these derivative terms and so you know W will get updated as some W minus the learning rate times DW right for each of the layers and similarly for B right now we've compute the back prop and have all these derivatives so that's one iteration of gradient descent for your neural network now before moving on just one more implementational detail conceptually will be useful to think of the cashier as storing the value of Z for the backward 
functions but when you implement this you see this in the programming exercise when you implement it you find that the cash may be a convenient way to get this value of the parameters at W 1 V 1 into the backward function as well so the program exercise you actually spawn the cash is Z as well as W and B all right so to store z2w to be 2 but from an implementational standpoint i just find this a convenient way to just get the parameters copied to where you need to need to use them later when you're computing back propagation so that's just an implementational detail that you see when you do the programming exercise so you've now seen one of the basic building blocks for implementing the deep neural network in each layer there's a for propagation step and there's a corresponding backward propagation step and there's a cash deposit information from one to the other in the next video we'll talk about how you can actually implement these building blocks let's go on to the next video\n"