Activating the Environment
To start using the environment set up in the previous video, we type "source activate" followed by the name of the environment, which was "NN series" in this case, and then hit return.
Opening Jupyter Notebook and Creating a New Notebook
We are going to create a new notebook on our desktop that uses the Python environment called "NN". In Jupyter Notebook, we do this by clicking New and selecting the Python kernel for the "NN" environment. This will allow us to use the functions and libraries available in the "NN" environment.
Importing Necessary Libraries
Before we can define our neural network as a function, we need to import the necessary library, which is NumPy. We can do this by typing "import numpy" and hitting return.
Defining the Neural Network Function
Now that we have imported the necessary library, we can define our neural network function. The function takes two measurements, m1 and m2, weights them with two weights, w1 and w2, and adds a bias. It first calculates an intermediate value, z, by multiplying m1 by w1, adding m2 times w2, and then adding the bias.
The function then passes z through the sigmoid function to produce a number between 0 and 1. Sigmoid is not a built-in NumPy function, so we need to define it ourselves before we can use it in our neural network. The sigmoid function takes one input X and returns 1 divided by 1 plus the exponential of negative X.
Defining the Sigmoid Function
To define the sigmoid function, we need to type "def sigmoid(X): return 1 / (1 + numpy.exp(-X))" and hit return. This will allow us to use the sigmoid function in our neural network.
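Putting the import and the two definitions together, the notebook cells might look like the following sketch. The function name NN and the order of its arguments are assumptions based on the description above, not something fixed by the video:

    import numpy

    def sigmoid(X):
        # squashes any input into a number between 0 and 1
        return 1 / (1 + numpy.exp(-X))

    def NN(m1, m2, w1, w2, b):
        # weight the two measurements and add the bias to get the intermediate value z
        z = m1 * w1 + m2 * w2 + b
        # squash z so the output lands between 0 and 1
        return sigmoid(z)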
Running the Neural Network Function with Random Connections
Now that we have defined both the neural network function and the sigmoid function, we need random values for the connections W1 and W2, as well as for the bias, before we can run it. We can create them by typing "W1 = numpy.random.rand(); W2 = numpy.random.rand(); B = numpy.random.rand()" and hitting enter.
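Written out as a cell, and assuming the names above, that is simply:

    W1 = numpy.random.rand()   # a random number between 0 and 1
    W2 = numpy.random.rand()
    B = numpy.random.rand()    # re-running the cell gives a fresh set of random values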
Running the Neural Network Function with Specific Measurements
To test our neural network, we need to give it specific measurements for m1 and m2, together with the random parameters W1, W2, and B. We can do this by typing "m1 = 3; m2 = 1.5" and hitting enter, and then calling the neural network function with the two measurements and the three random parameters and printing the result.
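Assuming the function name NN from the sketch above, that call looks like:

    m1 = 3
    m2 = 1.5
    print(NN(m1, m2, W1, W2, B))   # prints some value between 0 and 1, depending on the random parameters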
Interpreting the Results
The output of our neural network function is a number between 0 and 1. To interpret this result, we need to consider what the measurements m1 and m2 represent. In this case, they are from a flower, with m1 representing the length of the stem and m2 representing the diameter of the flower.
The computer interprets these measurements as inputs to its neural network function, and the output number as something like a probability that the flower is red or blue, based on the inputs it was given.
Predictions from the Neural Network
To further test our neural network, we can run it with different measurements and see what predictions it makes. We can do this by typing "m1 = 3; m2 = 8" and hitting enter, and then calling the function again with the new measurements and the same random parameters. The output is again a number between 0 and 1 that the computer interprets as a probability of the flower being red or blue.
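Again assuming the names from the sketches above:

    m1 = 3
    m2 = 8
    print(NN(m1, m2, W1, W2, B))   # another value between 0 and 1, still driven by the random weights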
The predictions from our neural network are completely random at this point, which is not surprising since we have randomly generated the connections W1 and W2. In the next video, we will take a step back and look at something called a cost function and a bit of calculus to get us moving in the right direction to find the right set of parameters for our neural network.