Long constructors, inheritance, class method constructors
Problem
My class takes six functions as parameters, along with numerous other parameters. Almost all of the fields have default values.
It also has a class method that creates an instance of the class with two of the fields initialized randomly, and a subclass that cheats a bit by making two of the function parameters unnecessary.
All of this carries a lot of redundant information, and I have found bugs in my code caused by mistakes in the constructor calls because they were so long; indeed, I found another while writing this question.
How can I improve this?
Parent class:
```
import numpy as np

class NeuralNet:
    def __init__(self,
                 weights,
                 biases,
                 learning_rate=0.01,
                 momentum=0.9,  # Set to zero to not use momentum
                 post_process=lambda i: i,  # Applied to the final output only (not used when training, but used when checking the error rate)
                 topErrorFunc=squaredMean, d_topErrorFunc=dSquaredMean,
                 actFunc=sigmoid, d_actFunc=dSigmoid,
                 topActFunc=sigmoid, d_topActFunc=dSigmoid):
        self.weights = weights
        self.biases = biases
        assert len(self.biases) == len(self.weights), "Must have as many bias vectors as weight matrices"
        self._prevDeltaWs = [np.zeros_like(w) for w in self.weights]
        self._prevDeltaBs = [np.zeros_like(b) for b in self.biases]
        self.learning_rate = learning_rate
        self.momentum = momentum
        self.post_process = post_process
        self.topErrorFunc = topErrorFunc
        self.d_topErrorFunc = d_topErrorFunc
        self.actFunc = actFunc
        self.d_actFunc = d_actFunc
        self.topActFunc = topActFunc
        self.d_topActFunc = d_topActFunc

    @classmethod
    def random_init(cls,
                    layer_sizes,
                    learning_rate=0.01,
                    momentum=0.9,  # Set to zero to not use momentum
                    post_process=lambda i: i,  # ...
```
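random_init is truncated above at the post_process parameter. Based on the description (an instance with two fields, the weights and biases, initialized randomly), it presumably mirrors the rest of the __init__ signature and forwards everything through. A hypothetical reconstruction follows; the array shapes and np.random calls are my assumptions, not the original code:
```
import numpy as np

class NeuralNet:
    # ... __init__ as above ...

    @classmethod
    def random_init(cls, layer_sizes,
                    learning_rate=0.01, momentum=0.9,
                    post_process=lambda i: i,
                    topErrorFunc=squaredMean, d_topErrorFunc=dSquaredMean,
                    actFunc=sigmoid, d_actFunc=dSigmoid,
                    topActFunc=sigmoid, d_topActFunc=dSigmoid):
        # Two fields initialized randomly: one weight matrix and one bias
        # vector per adjacent pair of layer sizes (shapes are assumed).
        weights = [np.random.randn(n_out, n_in)
                   for n_in, n_out in zip(layer_sizes, layer_sizes[1:])]
        biases = [np.random.randn(n) for n in layer_sizes[1:]]
        # Every remaining parameter is repeated in the call to cls(...),
        # which is exactly the duplication the question complains about.
        return cls(weights, biases, learning_rate, momentum, post_process,
                   topErrorFunc, d_topErrorFunc, actFunc, d_actFunc,
                   topActFunc, d_topActFunc)
```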
Solution
There are three ways I could see this going.
Stronger hierarchy
Create what amounts to an abstract base class for NeuralNet with stubs for the math functions and then make subclasses to override the methods.
```
import numpy as np

class NeuralNetBase(object):
    def __init__(self, weights, biases, learning_rate=0.01, momentum=0.9):
        self.weights = weights
        self.biases = biases
        assert len(self.biases) == len(self.weights), "Must have as many bias vectors as weight matrices"
        self._prevDeltaWs = [np.zeros_like(w) for w in self.weights]
        self._prevDeltaBs = [np.zeros_like(b) for b in self.biases]
        self.learning_rate = learning_rate
        self.momentum = momentum

    def act_func(self):
        raise NotImplementedError

    def d_act_func(self):
        raise NotImplementedError

    def top_act_func(self):
        raise NotImplementedError

    def d_top_act_func(self):
        raise NotImplementedError


class SigmoidNeuralNet(NeuralNetBase):
    def act_func(self):
        pass  # magic here

    def d_act_func(self):
        pass  # more magic

    def top_act_func(self):
        pass  # even more...

    def d_top_act_func(self):
        pass  # like hogwarts!
```
This would work well if there is a high correlation between the optional functions: if they tend to cluster together, they'd make a natural class hierarchy (and you'd have an easy way to see which nodes were using which function sets just by looking at their concrete classes). OTOH this won't work well if the functions are not correlated.
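For illustration, the stubs might be filled in like this. The logistic sigmoid, its derivative, and the convention that each method takes the pre-activation array as an argument are assumptions for the sketch, not the original code:
```
import numpy as np

class SigmoidNeuralNet(NeuralNetBase):
    def act_func(self, x):
        return 1.0 / (1.0 + np.exp(-x))

    def d_act_func(self, x):
        s = self.act_func(x)
        return s * (1.0 - s)  # derivative of the logistic sigmoid

    def top_act_func(self, x):
        return self.act_func(x)

    def d_top_act_func(self, x):
        return self.d_act_func(x)
```
A quick isinstance check (or just the class name in a repr) then tells you exactly which function set a given net is using.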
Kwargs to the rescue
You can simplify the constructor logic by using kwargs and by including the defaults in the init method (for sanity's sake I'd move the pure data into required parameters, but that's just aesthetics):
```
class NeuralNet(object):
    def __init__(self, weights, biases, learning_rate, momentum, **kwargs):
        self.weights = weights
        self.biases = biases
        assert len(self.biases) == len(self.weights), "Must have as many bias vectors as weight matrices"
        self._prevDeltaWs = [np.zeros_like(w) for w in self.weights]
        self._prevDeltaBs = [np.zeros_like(b) for b in self.biases]
        self.learning_rate = learning_rate
        self.momentum = momentum
        # assuming the default implementations are 'private' class methods
        # defined below
        self.act = kwargs.get('act_func', self._default_act_func)
        self.top_act = kwargs.get('top_act_func', self._default_top_act_func)
        self.d_act = kwargs.get('d_act_func', self._default_d_act_func)
        self.d_top_act = kwargs.get('d_top_act_func', self._default_d_top_act_func)
        self.postprocess = kwargs.get('post', self._default_postprocess)
```
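A call site then only names what it overrides, and everything else falls back to the defaults. The tanh functions below are hypothetical stand-ins for whatever you'd actually pass:
```
net = NeuralNet(weights, biases, learning_rate=0.01, momentum=0.9,
                act_func=np.tanh,  # override just two hooks...
                d_act_func=lambda x: 1 - np.tanh(x) ** 2)
# ...every other hook silently keeps its default implementation
```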
Function components
If a lot of reasoning goes into the choice or relationship of the functions, you could just put them all into an object and work with those objects instead of loose funcs:
```
class DefaultFuncs(object):
    # Defaults borrowed from the question's __init__ so that
    # DefaultFuncs(net) works with no extra arguments.
    def __init__(self, target, act_func=sigmoid, top_func=sigmoid,
                 d_act_func=dSigmoid, d_top_func=dSigmoid):
        self.Target = target
        self._act_func = act_func
        self._top_act_func = top_func
        self._d_act_func = d_act_func
        self._d_top_act_func = d_top_func

    def act_func(self):
        return self._act_func(self.Target)

    def d_act_func(self):
        return self._d_act_func(self.Target)

    def top_act_func(self):
        return self._top_act_func(self.Target)

    def d_top_act_func(self):
        return self._d_top_act_func(self.Target)


class NeuralNet(object):
    def __init__(self, weights, biases, learning_rate, momentum, funcs=DefaultFuncs):
        self.weights = weights
        self.biases = biases
        # ... etc
        self.Functions = funcs(self)
```
This would let you compose a collection of funcs into a DefaultFuncs and then reuse it -- DefaultFuncs is really just an elaborate tuple, tricked out so you can call into the functions from the owning NeuralNet instance.
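Usage might look like the sketch below: build one function set and share it across nets. The tanh-based functions are again hypothetical examples, and weights/biases are assumed to be built as in the question:
```
def tanh_funcs(target):
    # A reusable alternative function set; only the two overridden
    # hooks are named, the rest keep the DefaultFuncs defaults.
    return DefaultFuncs(target,
                        act_func=np.tanh,
                        d_act_func=lambda x: 1.0 - np.tanh(x) ** 2)

net_a = NeuralNet(weights_a, biases_a, 0.01, 0.9, funcs=tanh_funcs)
net_b = NeuralNet(weights_b, biases_b, 0.01, 0.9, funcs=tanh_funcs)
```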
All of these are functionally identical approaches (it sounds like you've already got the thing working and just want to clean it up). The main reasons for choosing among them amount to where you want to put the work: #1 is good if the functions correlate and you want to easily tell when a given net instance is using a set; #2 is just syntax sugar on what you've already got; #3 is really just #2, except that you compose a function set as a class (perhaps with error checking or more sophisticated reasoning) instead of as a dictionary.
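To make the #2-vs-#3 trade-off concrete, the same customization can live in an ad-hoc dict of keyword arguments or in a named class; the tanh functions are once more hypothetical:
```
# 2: the overrides live in an ad-hoc dict of keyword arguments
net = NeuralNet(weights, biases, 0.01, 0.9,
                act_func=np.tanh, d_act_func=lambda x: 1 - np.tanh(x) ** 2)

# 3: the overrides live in a named, reusable class, which could also
# validate its members (e.g. check that each one is callable)
class TanhFuncs(DefaultFuncs):
    def __init__(self, target):
        super().__init__(target, act_func=np.tanh,
                         d_act_func=lambda x: 1 - np.tanh(x) ** 2)

net = NeuralNet(weights, biases, 0.01, 0.9, funcs=TanhFuncs)
```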
Context
StackExchange Code Review Q#41440, answer score: 6