Instead of jumping straight into code, we work through examples of the forward and backpropagation algorithms. These step-by-step examples build on the theoretical equations by plugging in numerical values for a range of network configurations.
If you're not sure how to go from the theory to an implementation, they provide an additional springboard.
Furthermore, when you implement your own test version, you can compare your output against the calculated results to confirm they match and to track down any bugs.
Example 1 (2-2-2 Network)
Let's go through a step-by-step example of a simple neural network with 2 inputs, 2 hidden neurons, and 2 outputs using the sigmoid activation function.
We'll include biases in our calculations.
1. Define the Network Structure and Initialize Parameters
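As a rough sketch of what this step might look like in code, here is a 2-2-2 network with sigmoid activations and biases, plus its forward pass. The weight, bias, input, and target values below are illustrative placeholders, not necessarily the numbers used in the worked example:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Illustrative parameters for a 2-2-2 network (placeholder values)
W1 = np.array([[0.15, 0.20],    # hidden layer weights (2 hidden x 2 inputs)
               [0.25, 0.30]])
b1 = np.array([0.35, 0.35])     # hidden layer biases
W2 = np.array([[0.40, 0.45],    # output layer weights (2 outputs x 2 hidden)
               [0.50, 0.55]])
b2 = np.array([0.60, 0.60])     # output layer biases

x = np.array([0.05, 0.10])      # example input
t = np.array([0.01, 0.99])      # example target

# Forward pass
h = sigmoid(W1 @ x + b1)        # hidden activations
y = sigmoid(W2 @ h + b2)        # output activations
```

With the parameters defined this way, each layer is just a matrix-vector product plus a bias, squashed through the sigmoid.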
This completes the backpropagation process for the given neural network. The weights and biases have been updated based on the error gradients calculated during the backward pass.
Example 2 (2-3-2-2 Network)
1. Define the Network Structure and Initialize Parameters
We will use the sigmoid activation function and include biases in our calculations.
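The deeper 2-3-2-2 structure can be set up the same way; only the parameter shapes change. The values below are random placeholders (the article's own starting weights are not shown here), but the shapes match the updated matrices listed further down: W_1 is 3x2, W_2 is 2x3, and the output layer is 2x2:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(0)

# Parameter shapes for a 2-3-2-2 network (random placeholder values)
W1 = rng.random((3, 2)); b1 = rng.random(3)  # first hidden layer: 3 neurons, 2 inputs
W2 = rng.random((2, 3)); b2 = rng.random(2)  # second hidden layer: 2 neurons, 3 inputs
W3 = rng.random((2, 2)); b3 = rng.random(2)  # output layer: 2 neurons, 2 inputs

x = np.array([0.05, 0.10])  # example input

# Forward pass, layer by layer
a1 = sigmoid(W1 @ x + b1)
a2 = sigmoid(W2 @ a1 + b2)
y  = sigmoid(W3 @ a2 + b3)
```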
- Updated weights and biases for the second hidden layer:
\[ W_2 = \begin{bmatrix} 0.4471 & 0.4970 & 0.5470 \\ 0.5974 & 0.6474 & 0.6974 \end{bmatrix} \]
\[ b_2 = \begin{bmatrix} 0.5951 \\ 0.5957 \end{bmatrix} \]
- Updated weights and biases for the first hidden layer:
\[ W_1 = \begin{bmatrix} 0.1499325 & 0.199865 \\ 0.249930 & 0.299860 \\ 0.3499275 & 0.399855 \end{bmatrix} \]
\[ b_1 = \begin{bmatrix} 0.34865 \\ 0.3486 \\ 0.34855 \end{bmatrix} \]
This completes one full forward and backward pass (backpropagation) with the given example numbers through a neural network with 2 input neurons, 2 hidden layers (with 3 and 2 neurons respectively), and 2 output neurons using the sigmoid activation function.
Copyright (c) 2002-2025 xbdev.net - All rights reserved.
Designated articles, tutorials and software are the property of their respective owners.