HiveBrain v1.2.0

Recursive function, highly performance-critical

Submitted by: @import:stackexchange-codereview

Problem

import numpy as np

def uniop_nested(func, o_list):
    # apply func to every ndarray leaf of an arbitrarily nested list
    def inner(i_list):
        if isinstance(i_list[0], np.ndarray):
            return map(func, i_list)
        else:
            return map(inner, i_list)
    return inner(o_list)

def binop_nested(func, o1, o2):
    # recurse through two identically nested lists, applying func at the leaves
    if not isinstance(o1, np.ndarray):
        return [binop_nested(func, i1, i2) for (i1, i2) in zip(o1, o2)]
    else:
        return func(o1, o2)

def add_nested(s1, s2):
    return binop_nested(np.add, s1, s2)
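For concreteness, here is a minimal usage sketch of the helpers above (the definitions are repeated so the snippet runs standalone; the shapes are hypothetical examples, not from the question):

```python
import numpy as np

def binop_nested(func, o1, o2):
    # recurse through nested lists until ndarray leaves, then apply func
    if not isinstance(o1, np.ndarray):
        return [binop_nested(func, i1, i2) for (i1, i2) in zip(o1, o2)]
    else:
        return func(o1, o2)

def add_nested(s1, s2):
    return binop_nested(np.add, s1, s2)

# a list of differently shaped ndarrays (e.g. a weight matrix and two bias vectors)
rbm1 = [np.ones((2, 3)), np.ones(2), np.ones(3)]
rbm2 = [np.ones((2, 3)), np.ones(2), np.ones(3)]
summed = add_nested(rbm1, rbm2)

# a list of lists of ndarrays (e.g. per-layer parameters of a multilayer net)
net1 = [[np.ones((2, 2))], [np.ones(2)]]
net2 = [[np.ones((2, 2))], [np.ones(2)]]
summed_net = add_nested(net1, net2)
```

Each leaf is summed elementwise and the nesting structure and shapes are preserved.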


My code needs to work with lists of ndarrays and lists of lists of ndarrays. Profiling shows this is some of the most performance-critical code in my project.

  • How can I optimise it?

  • Can I rewrite the recursion as a loop?

  • Is there a nice way to rewrite them as Cython or a C extension? (I have no experience with this.)



My Stack Overflow question here indicates that changing datatypes is probably not going to be the solution.

More info:

  • Operands (o1, o2, s1, s2) are short lists. Profiling has shown me that using itertools.izip is slower.



  • Function return values are unlikely to repeat, as the ndarrays are full of floats being adjusted by mathematical operations on other floats. (We are talking about a large segment of ℝⁿ of possible values.)



  • The functions being applied are simple; add_nested is by far the most common op, but there are a few others, like uniop_nested(np.zeros_like, o_list).



  • The ndarrays are of different sizes/shapes, so a single multidimensional ndarray won't work.



Context:

This is being used for training Restricted Boltzmann Machines (RBMs) and neural networks.

I have a generic "Trainer" class that takes a Trainee class as a parameter. The Trainee class exposes a few methods like:

  • Get_update_gradient - a function that returns (for an RBM [Restricted Boltzmann Machine]) a list containing an ndarray of weight changes and 2 ndarrays of bias changes, or (for a multilayer neural net) a list containing a list of weight matrix changes and a list of bias changes



  • knowledge: a property expo

Solution

For the recursion, you can try adding an @memoize decorator to speed it up.

import functools

def memoize(f):
    cache = {}
    @functools.wraps(f)
    def memf(*x):
        # note: all arguments must be hashable for the cache lookup to work
        if x not in cache:
            cache[x] = f(*x)
        return cache[x]
    return memf


I'm not sure it will speed things up much in your case, but if you try it on Fibonacci, you will see that it vastly speeds up the recursion, since previous results are cached.
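As a quick illustration (a sketch; fib is just a demo function, not part of the question's code):

```python
import functools

def memoize(f):
    cache = {}
    @functools.wraps(f)
    def memf(*x):
        # cache results keyed by the (hashable) argument tuple
        if x not in cache:
            cache[x] = f(*x)
        return cache[x]
    return memf

@memoize
def fib(n):
    # naive recursion is exponential-time; memoization makes it linear
    return n if n < 2 else fib(n - 1) + fib(n - 2)

result = fib(30)  # fast with the cache; very slow without it
```

Note, though, that the question's operands are lists of ndarrays, which are not hashable, so the cache lookup in memf would raise TypeError on them unless the arguments were first converted into a hashable key.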

This is a general way to speed up recursive functions. For your specific case, we would need a bit more information about what your functions are and what you want to achieve to give a more specific answer.


Context

StackExchange Code Review Q#44039, answer score: 2
