C++ Feed-Forward Neural Network
Problem
After a few days of reading articles, watching videos and banging my head against neural networks, I have finally managed to understand them well enough to write my own feed-forward implementation in C++.
It does have some scratch back-propagation functionality, but it needs further work (not done yet).
Here's my code; I would like you to point out any bad practices, tips, you know :)
main.cpp
```
#include <cstdlib>
#include <ctime>
#include <iostream>
#include <vector>

#include "neural-net.hpp"

int main(int argc, char **argv)
{
    srand(time(NULL));

    /* Topology: x-y-z-...-n where x is the input layer and n is the output layer */
    /* In this case: 1 input layer with 2 neurons, 1 hidden layer with 3 neurons and an output layer with 1 neuron */
    std::vector<unsigned> vecTopology = { 2, 3, 1 };
    NeuralNet net(vecTopology, false);

    /* Set the input values and expected results for back-propagation (not finished!) */
    std::vector<double> vecInputs(vecTopology[0], 1);
    std::vector<double> vecExpected(3, 0);

    std::cout << "Inputs: ";
    for (int i = 0; i < vecInputs.size(); i++) {
        std::cout << vecInputs[i] << " ";
    }
    std::cout << "\n\n";

    net.feedForward(vecInputs);
    net.backPropagate(vecExpected);
    net.status();
}
```

neural-net.hpp
```
#ifndef NEURALNET_HPP
#define NEURALNET_HPP

#include <vector>

#include "Neuron.hpp"

class NeuralNet
{
public:
    NeuralNet(const std::vector<unsigned> &, bool = false);
    void status();
    void setWeight(unsigned, unsigned, unsigned, double);
    void feedForward(const std::vector<double> &);
    void backPropagate(const std::vector<double> &);
    std::vector<double> getOutput();

private:
    std::vector<Layer> vecLayers;
    bool useBias;
};

#endif
```

neural-net.cpp
```
#include "neural-net.hpp"

NeuralNet::NeuralNet(const std::vector<unsigned> &vecTopology, bool useBias)
{
    this->useBias = useBias;

    /* 'Build' the network based on the topology */
    for (unsigned l = 0; l < vecTopology.size(); l++) {
        this->vecLayers.push_back(Layer());
        unsigned nAxons = (l == vecTopology.size() - 1) ? 0 : vecTopology[l + 1];
        for (unsigned n = 0; n < vecTopology[l] + ((this->useBias) ? 1 : 0); n++) {
            /* ... (the rest of neural-net.cpp is cut off in the extract) */
```
Solution
So here are my 2 cents.
- I think all the `this->` makes it hard to read. Also, this is only necessary when there might be name conflicts, which there are not here.
- The `randomWeight` function should be a member of the `Axon`. That way you can initialize the axon vector simply by `vecAxons = std::vector<Axon>(nAxons);` (a sketch follows below).
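For illustration, here is what that could look like; the struct layout and the `rand()`-based initialization are assumptions, not the question's actual class:

```
#include <cstdlib>
#include <ctime>
#include <vector>

/* Hypothetical Axon that initializes its own weight on construction. */
struct Axon
{
    /* Pseudo-random weight in [0, 1]; rand() kept to match the question,
       but see the <random> point below. */
    static double randomWeight()
    {
        return rand() / static_cast<double>(RAND_MAX);
    }

    double weight = randomWeight();
};

int main()
{
    srand(time(NULL));
    unsigned nAxons = 5;
    /* Every element is default-constructed with its own random weight. */
    std::vector<Axon> vecAxons(nAxons);
}
```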
- Even if you do not go for that, you should always reserve memory if you know the size of an array beforehand. That way you avoid reallocations: `vecAxons.reserve(nAxons);`
- As you use C++, you should use the `<random>` library rather than `rand`. When you want to develop serious models, `rand` is not your friend.
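A minimal sketch of what a `<random>`-based `randomWeight` could look like (the uniform [0, 1] range is an assumption):

```
#include <iostream>
#include <random>

/* One Mersenne Twister engine for the whole program, seeded once on first use. */
double randomWeight()
{
    static std::mt19937 engine{std::random_device{}()};
    static std::uniform_real_distribution<double> dist(0.0, 1.0);
    return dist(engine);
}

int main()
{
    for (int i = 0; i < 3; i++) {
        std::cout << randomWeight() << "\n";
    }
}
```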
- You should try to use range-based loops, which improve readability. For example

```
/* Calculate the sum of inputs * weights going to the neuron and pass it through the transfer function... */
for (unsigned n = 0; n < vecPreviousLayer.size(); n++) {
    this->outputSum += vecPreviousLayer[n].output * vecPreviousLayer[n].vecAxons[this->index].weight;
}
```

can be written as

```
/* Calculate the sum of inputs * weights going to the neuron and pass it through the transfer function... */
for (const Neuron& sourceNeuron : vecPreviousLayer) {
    outputSum += sourceNeuron.output * sourceNeuron.vecAxons[index].weight;
}
```
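For completeness, a self-contained sketch of the sum-then-transfer step that the comment above describes; the stripped-down structs and the sigmoid choice are assumptions, not the question's actual classes:

```
#include <cmath>
#include <vector>

/* Stripped-down stand-ins for the question's Neuron/Axon classes. */
struct Axon   { double weight; };
struct Neuron { double output; std::vector<Axon> vecAxons; };

/* A common choice of transfer (activation) function. */
double sigmoid(double x)
{
    return 1.0 / (1.0 + std::exp(-x));
}

/* Weighted sum of the previous layer's outputs, passed through the transfer function. */
double activate(const std::vector<Neuron> &vecPreviousLayer, unsigned index)
{
    double outputSum = 0.0;
    for (const Neuron &sourceNeuron : vecPreviousLayer) {
        outputSum += sourceNeuron.output * sourceNeuron.vecAxons[index].weight;
    }
    return sigmoid(outputSum);
}
```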
- Whenever possible avoid `pow()` of natural numbers. I know it is tedious, but `pow` is just incredibly slow, especially stuff like `pow(x, 2)` vs `x*x`.
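For instance, in a squared-error calculation (hypothetical example):

```
#include <cmath>

double squaredError(double expected, double output)
{
    double delta = expected - output;
    return delta * delta;           /* fast */
    /* return std::pow(delta, 2);      same result, but much slower */
}
```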
Context
StackExchange Code Review Q#158670, answer score: 4