
Resource-constrained project scheduling


Problem

I'm trying to implement an algorithm for a resource-constrained project scheduling problem. I have several resources with resource constraints, and everything lives in an integer time domain. With my class ResourceUtilization I want to ensure that the resource constraints are never violated at any time. The key element of the class is the array utilization of shape (num_resources, max_makespan). I add an activity with given resource demands to ResourceUtilization from start_time to finish_time, and I also need to remove that activity at some point. The most frequently called method, is_free(), checks whether I can add an activity to a time interval.

```
import numpy as np
cimport numpy as np
import cython

DTYPE = np.int
ctypedef np.int_t DTYPE_t

cdef class ResourceUtilization:
    cdef DTYPE_t[:, :] res_constraints
    cdef DTYPE_t[:, :] utilization
    cdef DTYPE_t max_makespan
    cdef DTYPE_t num_resources

    def __init__(self, np.ndarray[DTYPE_t, ndim=2] res_constraints, DTYPE_t num_resources, DTYPE_t max_makespan):
        self.res_constraints = res_constraints
        self.num_resources = num_resources
        self.max_makespan = max_makespan
        self.utilization = np.zeros([self.num_resources, max_makespan], dtype=DTYPE)

    def add(self, np.ndarray[DTYPE_t, ndim=2] demands, DTYPE_t start_time, DTYPE_t finish_time):
        """
        Add demands to interval [start_time, finish_time).
        """
        # Grow the utilization array if the activity ends past the current horizon.
        if finish_time > self.max_makespan:
            self.extend_makespan(finish_time)
        copy_util = self.utilization[:, start_time:finish_time] + demands
        self.utilization[:, start_time:finish_time] = copy_util

    def remove(self, np.ndarray[DTYPE_t, ndim=2] demands, DTYPE_t start_time, DTYPE_t finish_time):
        """
        Remove demands from interval [start_time, finish_time).
        """
        copy_util = self.utilization[:, start_time:finish_time] - demands
        self.utilization[:, start_time:finish_time] = copy_util

    def extend_makespan(self, DTYPE_t minimal_extend_time):
        """
        Extend the utilization array to cover at least minimal_extend_time.
        """
        difference = self.max_makespan * np.floor(minimal_extend_time / self.max_makespan)
        extension = np.zeros([self.num_resources, difference], dtype=DTYPE)
        self.utilization = np.hstack((self.utilization, extension))
        self.max_makespan += difference
```

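For reference, the intended behaviour of the class can be sketched in plain NumPy (this sketch and its names are mine, not part of the submission; capacities are kept as a 1-D vector here to sidestep the shape issues discussed in the answer):

```python
import numpy as np

class ResourceUtilizationSketch:
    """Plain-NumPy sketch of the interface described above (illustrative only)."""

    def __init__(self, res_constraints, num_resources, max_makespan):
        # res_constraints: one capacity per resource, kept 1-D here.
        self.res_constraints = np.asarray(res_constraints)
        self.utilization = np.zeros((num_resources, max_makespan), dtype=np.int64)

    def add(self, demands, start_time, finish_time):
        # demands: column vector (num_resources, 1), broadcast over the interval.
        self.utilization[:, start_time:finish_time] += demands

    def remove(self, demands, start_time, finish_time):
        self.utilization[:, start_time:finish_time] -= demands

    def is_free(self, demands, start_time, finish_time):
        # The activity fits iff no resource exceeds its capacity anywhere
        # in [start_time, finish_time).
        window = self.utilization[:, start_time:finish_time] + demands
        return bool(np.all(window <= self.res_constraints[:, None]))

ru = ResourceUtilizationSketch([4, 3], num_resources=2, max_makespan=10)
ru.add(np.array([[2], [1]]), 0, 5)
print(ru.is_free(np.array([[3], [1]]), 0, 5))  # 2 + 3 > 4 -> False
print(ru.is_free(np.array([[2], [2]]), 0, 5))  # 4 <= 4, 3 <= 3 -> True
```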
Solution

If you look at the generated code, it's no wonder that the performance is
the same as regular NumPy/Python: the code isn't taking advantage of
Cython in any way. There are still calls to the same NumPy functions,
and the loop/np.all call isn't inlined, so there's no opportunity for
a speed-up.

Algorithmic changes aside, one option here is to write a loop
that does the same thing as the combination of np.all plus slicing,
and to test whether that's faster. If so, there would also be alternatives
with parallel processing/threading to speed the test up further. Then again,
the arrays need to be quite large for that to offset the increased
complexity.
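As a plain-Python/NumPy sketch of that suggestion (the function names are mine; in the answer's setting the loop version would be the one Cython compiles): the explicit loop can return early at the first violation, whereas np.all over a slice always evaluates the whole window.

```python
import numpy as np

def is_free_vectorised(utilization, limits, start, finish):
    # One shot over the whole window; allocates a temporary boolean array.
    return bool(np.all(utilization[:, start:finish] <= limits[:, None]))

def is_free_loop(utilization, limits, start, finish):
    # Explicit loops: bails out at the first violated (resource, time) cell.
    # This is the shape of loop that Cython can compile to tight C code.
    for t in range(start, finish):
        for r in range(utilization.shape[0]):
            if utilization[r, t] > limits[r]:
                return False
    return True

rng = np.random.default_rng(0)
util = rng.integers(0, 5, size=(3, 1000))   # values 0..4
limits = np.array([4, 4, 4])
print(is_free_vectorised(util, limits, 0, 1000))  # True: nothing exceeds 4
util[1, 500] = 9
print(is_free_loop(util, limits, 0, 1000))        # False: exits early at t=500
```

In pure Python the inner loop is usually slower despite the early exit; the potential win only appears once Cython types the indices and array accesses so the loop compiles down to C.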

I'm having a bit of trouble with the intended shapes of the
vectors/matrices; the idea would be to have something like this (but the
code doesn't quite work so far, as the test variable should probably
be a vector instead):

```
def is_free(self, np.ndarray[DTYPE_t, ndim=2] demands, DTYPE_t start_time, DTYPE_t finish_time):
    """
    Check whether we can add demands to interval [start_time, finish_time).
    """
    test = self.res_constraints - demands
    for i in range(start_time, finish_time):
        if np.any(self.utilization[:, i] > test):
            return False
        # OR inline this test, iterate through the vector directly
        for j in range(self.utilization.shape[0]):
            if self.utilization[j, i] > test[j]:
                return False

    return True
```
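One way out of the shape trouble, sketched in plain NumPy rather than Cython (the standalone function signature is mine): flatten the per-resource capacities and demands so that test is genuinely a 1-D vector, indexable as test[j].

```python
import numpy as np

def is_free(utilization, res_constraints, demands, start_time, finish_time):
    # Ravel both column vectors so `test` is 1-D: one remaining capacity
    # per resource, indexable as test[j].
    test = np.ravel(res_constraints) - np.ravel(demands)
    for i in range(start_time, finish_time):
        for j in range(utilization.shape[0]):
            if utilization[j, i] > test[j]:
                return False
    return True

util = np.array([[2, 2, 0],
                 [1, 1, 0]])
constraints = np.array([[4], [3]])   # column vectors, as in the class above
print(is_free(util, constraints, np.array([[2], [2]]), 0, 2))  # fits exactly -> True
print(is_free(util, constraints, np.array([[3], [1]]), 0, 2))  # 2 > 4 - 3 -> False
```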


Also, get_capacity with the settings from test_ru throws an
exception relating to the shapes confusion:

```
>>> ru.get_capacity(0, 1)
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "resource_ru.pyx", line 61, in resource_ru.ResourceUtilization.get_capacity (...)
    return self.res_constraints[resource] - self.get(resource, time)
TypeError: unsupported operand type(s) for -: 'resource_ru._memoryviewslice' and 'int'
```
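That error is less about shapes than about types: in Cython, indexing a 2-D typed memoryview with a single index yields another memoryview slice, and memoryview slices support no arithmetic operators. The same failure mode can be shown with Python's built-in memoryview, used here as a stand-in:

```python
import numpy as np

res_constraints = np.array([4, 3], dtype=np.int64)
mv = memoryview(res_constraints)   # stand-in for a Cython typed memoryview

# Reproduce the failure mode: a memoryview slice has no arithmetic operators.
try:
    mv[0:1] - 1
except TypeError as exc:
    print(type(exc).__name__, "-", exc)

# Fix 1: convert the slice back to an ndarray before doing arithmetic.
print(np.asarray(mv[0:1]) - 1)   # [3]

# Fix 2: index all the way down to a scalar element.
print(mv[0] - 1)                 # 3
```

In the class above the analogous fix would be something like `np.asarray(self.res_constraints[resource]) - self.get(resource, time)`, or indexing down to the scalar (e.g. `self.res_constraints[resource, 0]`) if one capacity per resource is intended.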

Context

StackExchange Code Review Q#79237, answer score: 2
