Neural Network From Scratch with C++ — Part 3


After implementing feedforward and backpropagation in the last part, we can finally put the last piece of the puzzle together to complete our mini neural network.

Weight and Bias Adjustment

After the backpropagation process, we still need to adjust the weights and biases so the model actually learns from the data. This final step fine-tunes the parameters, improving the network’s accuracy and performance.

The steps to adjust a weight:

  1. Multiply the transposed input of the cell by its delta, then scale by learningRate
  2. Add the result to the cell’s weight
  3. Repeat the process for every cell

The steps to adjust a bias:

  1. Perform a column-wise sum over the delta
  2. Multiply the result by learningRate
  3. Add the result to the cell’s bias
  4. Repeat the process for every cell
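In matrix form, with A the input arriving at a cell, Δ the cell’s delta, and η the learningRate, the two updates collapse to:

$$W \leftarrow W + \eta\, A^{\top}\Delta, \qquad b \leftarrow b + \eta\, \mathbf{1}^{\top}\Delta$$

where the 1ᵀΔ term is just the column-wise sum of the delta. Note that both updates are added rather than subtracted; this assumes the backpropagation step from the last part already baked the direction of the correction into Δ.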

This is what the code would look like:

void Neuron::linkAdjustment(std::vector<Cell> &isolatedCell) {
    // The first cell is fed directly by the input node, so its weight
    // update uses the transposed input instead of a previous cell's output.
    Link weight = (this->learningAxon->getInputNode().transpose() * isolatedCell.at(0).delta) * this->learningRate;
    isolatedCell.at(0).weight = isolatedCell.at(0).weight + weight;

    // The bias update is the column-wise sum of the delta, scaled by the learning rate.
    Link bias = isolatedCell.at(0).delta.colwise().sum() * this->learningRate;
    isolatedCell.at(0).bias = isolatedCell.at(0).bias + bias;

    // Every subsequent cell takes the previous cell's node as its input.
    for (std::size_t cellIndex = 1; cellIndex < isolatedCell.size(); cellIndex++) {
        Link weight = (isolatedCell.at(cellIndex - 1).node.transpose() * isolatedCell.at(cellIndex).delta) * this->learningRate;
        isolatedCell.at(cellIndex).weight = isolatedCell.at(cellIndex).weight + weight;

        Link bias = isolatedCell.at(cellIndex).delta.colwise().sum() * this->learningRate;
        isolatedCell.at(cellIndex).bias = isolatedCell.at(cellIndex).bias + bias;
    }
}
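To sanity-check the arithmetic in isolation, here is a minimal standalone sketch of the same two updates. It assumes Link and Node wrap Eigen::MatrixXf (which the .transpose() and .colwise().sum() calls above suggest), and the dimensions are made up for illustration:

#include <Eigen/Dense>
#include <iostream>

int main() {
    float learningRate = 0.3f;

    // 2 samples with 4 features each, feeding a cell with 3 nodes.
    Eigen::MatrixXf input(2, 4);
    input << 1, 0, 1, 0,
             0, 1, 0, 1;

    Eigen::MatrixXf weight = Eigen::MatrixXf::Constant(4, 3, 0.5f);
    Eigen::MatrixXf bias   = Eigen::MatrixXf::Zero(1, 3);

    // One delta row per sample, one column per node.
    Eigen::MatrixXf delta(2, 3);
    delta << 0.1f, -0.2f, 0.05f,
             0.0f,  0.3f, -0.1f;

    // The same updates linkAdjustment performs, written out directly.
    weight += (input.transpose() * delta) * learningRate;
    bias   += delta.colwise().sum() * learningRate;

    std::cout << "weight:\n" << weight << "\nbias:\n" << bias << std::endl;
    return 0;
}

Each weight column gets nudged by its node’s accumulated delta, and each bias by the column-wise sum, matching the two lists of steps above.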

Putting It All Together

Now that we have all the pieces in place, we just need to wire them together and repeat the whole feedforward, backpropagation, and adjustment cycle. How many times we repeat it is defined by the epoch hyperparameter.

void Neuron::learn() {
    // One epoch = one full pass of feedforward, backpropagation,
    // and weight/bias adjustment over the whole input.
    for (int i = 0; i < this->epoch; i++) {
        this->feedForward(this->learningAxon->getInputNode(), isolatedCell);
        this->backProp(isolatedCell);

        this->linkAdjustment(isolatedCell);
    }

    // Persist the trained cells back into the axon.
    this->learningAxon->updateIsolatedCells(isolatedCell);
}
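One thing worth noting about this loop: it trains on the full batch. Every epoch pushes the entire input node through the network at once, so each weight and bias adjustment is driven by the deltas of the whole training set rather than a single sample.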

Test

At this point, we’re actually done, but to prove our neural network works, we need to test it. To do this, we take a sample chunk of the dataset, run it through the trained model, and compare the predictions against the dataset’s actual labels, i.e. the output node.

int main(int argc, char *argv[]) {
    try {
        #pragma region learningPhase
            // 5000 epochs with a learning rate of 0.3;
            // 4 input features and 3 output classes.
            Neuron learningNeuron(5000, 0.3);
            Axon learningAxon(4, 3);

            learningAxon.seedData("./src/resources/train.csv");
            learningAxon.addCell(5);    // hidden cell with 5 nodes

            learningNeuron.attach(learningAxon);
            learningNeuron.learn();
        #pragma endregion

        #pragma region guessingPhase
            Axon guessAxon(4, 3);
            guessAxon.seedData("./src/resources/test.csv");

            // Attach the learning axon so the guess reuses the trained cells.
            Neuron guessNeuron;
            guessNeuron.attach(learningAxon);

            Node guessResult = guessNeuron.guess(guessAxon);
        #pragma endregion

        #pragma region compareResult
            Node endData = guessAxon.getOutputNode();

            float successResult = 0;
            for (int row = 0; row < endData.rows(); row++) {
                // Find the index of the hot column in the one-hot label row.
                int actualResultIdx = 0;
                for (int col = 0; col < endData.row(row).cols(); col++) {
                    if (endData(row, col) > 0) {
                        actualResultIdx = col;
                    }
                }

                // Count a hit when the predicted row peaks at that same index.
                if (guessResult(row, actualResultIdx) == guessResult.row(row).maxCoeff()) {
                    successResult++;
                }
            }

            std::cout << "accuracy: " << (successResult / guessResult.rows()) << std::endl;
        #pragma endregion

        return 0;
    } catch (const std::exception &e) {
        std::cerr << e.what() << std::endl;
        return 1;
    }
}
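To build it, Eigen just needs to be on the include path. A minimal sketch of the build, assuming a system-wide Eigen install and that all sources live under ./src (both assumptions about the repository layout):

g++ -std=c++17 -I /usr/include/eigen3 ./src/*.cpp -o neural-net
./neural-net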

Compiling and running the full program will result in an accuracy of 0.96. This high level of accuracy demonstrates that our neural network is functioning effectively, successfully learning from the data, and making accurate predictions.

This confirms that our implementation of feedforward, backpropagation, and the final adjustments to weights and biases has been successful.

I hope you found this helpful and interesting. I’ll include all the references and the GitHub repository below. Thank you for reading!

GitHub repository: Goeta — Neural network from scratch with C++

References:

  1. https://datadan.io/blog/neural-net-with-go
  2. https://ujjwalkarn.me/2016/08/09/quick-intro-neural-networks/
  3. https://www.geeksforgeeks.org/deep-neural-net-with-forward-and-back-propagation-from-scratch-python/
