HiveBrain v1.2.0

A simple fully connected ANN module

Submitted by: @import:stackexchange-codereview
Tags: simple, module, ann, fully-connected

Problem

I've written a simple module that creates a fully connected neural network of any size. The train function takes a list of tuples, each holding a training example array first and an array containing its class second; a list with the number of neurons in every layer, including the input and output layers; a learning rate; and a number of epochs.
To run a trained network I've written the run function, which takes an input and the weights from the trained network. Since I'm a beginner at programming and machine learning, I'd be very happy to get advice on computational efficiency and optimization.

```
import numpy as np

def weights_init(inSize,outSize): #initialize the weights
    return 2*np.random.random((inSize,outSize))-1

def Sigmoid(input, weights): #create a sigmoid layer and return a layer along with its derivative
    out = 1/(1+np.exp(-np.dot(input,weights)))
    derivative = out*(1-out)
    return out,derivative

def backProp(layers, weights, deriv, size, rate = 1):
    derivative = deriv.pop() #get the cost function derivative
    #reverse all the lists because we need to go backwards
    deriv = deriv[::-1]
    layers = layers[::-1]
    weights = weights[::-1]
    new_weights=[]
    #backpropagate
    new_weights.append(weights[0]+(layers[1].T.dot(derivative*rate))) #this one does not fit well the algorithm inside for loop, so it's outside of it
    for i in range(len(size)-2):
        derivative = derivative.dot(weights[i].T)*deriv[i]
        new_weights.append(weights[i+1]+(layers[i+2].T.dot(derivative*rate)))
    return new_weights[::-1]

def train(input,size,rate=1,epochs=1): #train the network
    layers=[]
    weights=[]
    derivs=[]
    for i in xrange(len(size)-1): #weights initialization
        weights.append(weights_init(size[i],size[i+1]))
    for i in xrange(epochs): #the training process
        for example, target in input: #online learning
            layers.append(example)
            for i in xrange(len(size)-1):
                layer,derivative = Sigmoid(layers[i],weights[i]) #calculate the layer and its derivative
                layers.append(layer)
                derivs.append(derivative)
            loss_deriv = target-layers[-1] #loss function derivative
            derivs[-1] = loss_deriv*derivs[-1] #multiply the loss derivative by the final layer's derivative
            weights = backProp(layers,weights,derivs,size,rate) #update the weights
            layers=[]
            derivs = []
    return weights
```

Solution

Without commenting on the algorithm, for lack of knowledge regarding neural networks, this answer offers some style suggestions. Take the first function:

def weights_init(inSize,outSize): #initialize the weights
    return 2*np.random.random((inSize,outSize))-1


In Python it is against the PEP 8 style guide to use camelCase for argument names. That comment should be dropped down a line. Add some spaces between your arguments and operators so the code can breathe:

def weights_init(in_size, out_size): 
    #initialize the weights
    return 2 * np.random.random((in_size, out_size)) - 1
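As a quick sanity sketch (the sizes here are arbitrary, not from the post), the initializer should return a matrix of the requested shape with values uniformly drawn from [-1, 1):

```python
import numpy as np

def weights_init(in_size, out_size):
    # Uniform weights in [-1, 1): np.random.random gives [0, 1)
    return 2 * np.random.random((in_size, out_size)) - 1

w = weights_init(3, 4)
print(w.shape)                       # (3, 4)
print(((-1 <= w) & (w < 1)).all())   # True
```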


Function names shouldn't be capitalized either; PEP 8 prefers lowercase_with_underscores.

def sigmoid(input, weights): 
    #create a sigmoid layer and return a layer along with its derivative
    out = 1 / (1 + np.exp(-np.dot(input, weights)))
    derivative = out * (1 - out)
    return out, derivative
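Since sigmoid returns both the activation and its derivative, a quick finite-difference check can confirm the two agree (a standalone sanity check with made-up inputs, not part of the original answer):

```python
import numpy as np

def sigmoid(x, weights):
    out = 1 / (1 + np.exp(-np.dot(x, weights)))
    return out, out * (1 - out)

x = np.array([[0.5, -1.0]])
w = np.array([[0.3], [0.7]])

out, deriv = sigmoid(x, w)

# Central-difference approximation of d(out)/dz at z = x.dot(w)
eps = 1e-6
z = x.dot(w)
s = lambda v: 1 / (1 + np.exp(-v))
numeric = (s(z + eps) - s(z - eps)) / (2 * eps)
print(np.allclose(deriv, numeric))  # True
```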


Default arguments in the function signature shouldn't have spaces around the equals sign.

Also, it would be nice to add double line breaks for ease of reading:

def backProp(layers, weights, deriv, size, rate=1):
    #get the cost function derivative
    derivative = deriv.pop()

    #reverse all the lists because we need to go backwards
    deriv = deriv[::-1]
    layers = layers[::-1]
    weights = weights[::-1]
    new_weights = []

    #backpropagate
    #this one does not fit the algorithm inside the for loop, so it's outside of it
    new_weights.append(weights[0] + (layers[1].T.dot(derivative * rate)))

    for i in range(len(size) - 2):
        derivative = derivative.dot(weights[i].T) * deriv[i]
        new_weights.append(weights[i + 1] + (layers[i + 2].T.dot(derivative * rate)))

    return new_weights[::-1]
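To see the calling convention backProp expects (row-vector layers, and a derivatives list whose last entry has already been scaled by the loss derivative), here is a self-contained single-update sketch on an assumed 2-3-1 network; the network size and inputs are illustrative, not from the post. The returned weights keep the original shapes:

```python
import numpy as np

np.random.seed(1)

def weights_init(in_size, out_size):
    return 2 * np.random.random((in_size, out_size)) - 1

def sigmoid(x, weights):
    out = 1 / (1 + np.exp(-np.dot(x, weights)))
    return out, out * (1 - out)

def backProp(layers, weights, deriv, size, rate=1):
    derivative = deriv.pop()
    deriv, layers, weights = deriv[::-1], layers[::-1], weights[::-1]
    new_weights = [weights[0] + layers[1].T.dot(derivative * rate)]
    for i in range(len(size) - 2):
        derivative = derivative.dot(weights[i].T) * deriv[i]
        new_weights.append(weights[i + 1] + layers[i + 2].T.dot(derivative * rate))
    return new_weights[::-1]

# One forward pass on a 2-3-1 network, then one weight update
size = [2, 3, 1]
weights = [weights_init(size[i], size[i + 1]) for i in range(len(size) - 1)]
example = np.array([[1.0, 0.0]])   # row vector, shape (1, 2)
target = np.array([[1.0]])

layers, derivs = [example], []
for i in range(len(size) - 1):
    layer, d = sigmoid(layers[i], weights[i])
    layers.append(layer)
    derivs.append(d)
derivs[-1] = (target - layers[-1]) * derivs[-1]   # scale by loss derivative

new_weights = backProp(layers, weights, derivs, size)
print([w.shape for w in new_weights])  # [(2, 3), (3, 1)]
```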

def train(input, size, rate=1, epochs=1):
    #train the network
    layers = []
    weights = []
    derivs = []

    #weights initialization
    for i in xrange(len(size) - 1):
        weights.append(weights_init(size[i], size[i + 1]))

    #the training process
    for i in xrange(epochs):

        #online learning
        for example, target in input:
            layers.append(example)

            for i in xrange(len(size) - 1):
                #calculate the layer and its derivative
                layer, derivative = sigmoid(layers[i], weights[i])

                layers.append(layer)
                derivs.append(derivative)

            #loss function derivative
            loss_deriv = target - layers[-1]

            #multiply the loss derivative by the final layer's derivative
            derivs[-1] = loss_deriv * derivs[-1]

            #update the weights
            weights = backProp(layers, weights, derivs, size, rate)
            layers = []
            derivs = []

    return weights
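Putting the reviewed pieces together, the sketch below is a lightly restructured, self-contained Python 3 version of the module (range instead of xrange, per-example lists reset inside the loop). The run function is an assumption: the question mentions it but its body isn't shown, so a plain forward pass is sketched here. The seed, architecture, and XOR-style dataset are illustrative choices, not from the original post:

```python
import numpy as np

np.random.seed(0)

def weights_init(in_size, out_size):
    #uniform weights in [-1, 1)
    return 2 * np.random.random((in_size, out_size)) - 1

def sigmoid(x, weights):
    #activation and its derivative
    out = 1 / (1 + np.exp(-np.dot(x, weights)))
    return out, out * (1 - out)

def backProp(layers, weights, deriv, size, rate=1):
    derivative = deriv.pop()
    deriv, layers, weights = deriv[::-1], layers[::-1], weights[::-1]
    new_weights = [weights[0] + layers[1].T.dot(derivative * rate)]
    for i in range(len(size) - 2):
        derivative = derivative.dot(weights[i].T) * deriv[i]
        new_weights.append(weights[i + 1] + layers[i + 2].T.dot(derivative * rate))
    return new_weights[::-1]

def train(data, size, rate=1, epochs=1):
    weights = [weights_init(size[i], size[i + 1]) for i in range(len(size) - 1)]
    for _ in range(epochs):
        for example, target in data:   #online learning
            layers, derivs = [example], []
            for i in range(len(size) - 1):
                layer, derivative = sigmoid(layers[i], weights[i])
                layers.append(layer)
                derivs.append(derivative)
            derivs[-1] = (target - layers[-1]) * derivs[-1]
            weights = backProp(layers, weights, derivs, size, rate)
    return weights

def run(x, weights):
    #assumed implementation: forward pass only, discarding derivatives
    for w in weights:
        x, _ = sigmoid(x, w)
    return x

#XOR-style toy data: row-vector inputs, row-vector targets
data = [(np.array([[0, 0]]), np.array([[0]])),
        (np.array([[0, 1]]), np.array([[1]])),
        (np.array([[1, 0]]), np.array([[1]])),
        (np.array([[1, 1]]), np.array([[0]]))]

w = train(data, [2, 4, 1], rate=1, epochs=5000)
for x, t in data:
    print(t.ravel(), run(x, w).ravel())
```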


Context

StackExchange Code Review Q#141047, answer score: 4
