Distributing resources for maximum gain
Problem
This feels like it would be a well researched (or solved) problem, but I can't find the right words to search for it.
Suppose there is a collection of shared resources, and a collection of possible processes. Each process consumes fixed quantities of each resource and provides a fixed benefit. Each process can be applied zero or more times (but no fractional amounts), and resources can be left unused.
How can a distribution that maximises the total benefit be found? Brute-forcing every possible allocation will always find an answer, but is there a more efficient approach?
An attempt to formalise the question:
$$
\text{Given resources } R \in \mathbb{R}_{\ge0}^n \\
\text{and process requirements } P \in \mathbb{R}_{\ge0}^{m,n} \\
\text{and process outputs } O \in \mathbb{R}_{\ge0}^m \\
\text{find } N \in \mathbb{Z}_{\ge0}^m \\
\text{such that no resource is over-spent: } \sum_{i=1}^m{N_i P_{i,j}} \le R_j \ \forall \ j \\
\text{and } N \cdot O \text{ is maximised}
$$
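For small instances, the formalisation above can be checked directly by brute force: bound how many times each process could possibly run, enumerate every combination, and keep the best feasible one. A minimal sketch (the resource amounts, requirements, and outputs below are made-up illustration data):

```python
from itertools import product

def brute_force(R, P, O):
    """Enumerate all feasible N in Z_{>=0}^m and return the best one.

    R: resource amounts (length n)
    P: P[i][j] = amount of resource j consumed by one run of process i
    O: O[i] = benefit of one run of process i
    """
    m, n = len(P), len(R)
    # Upper bound on runs of each process, limited by its scarcest resource.
    bounds = [min((int(R[j] // P[i][j]) for j in range(n) if P[i][j] > 0),
                  default=0)
              for i in range(m)]
    best_N, best_val = [0] * m, 0
    for N in product(*(range(b + 1) for b in bounds)):
        # Feasibility: no resource over-spent.
        if all(sum(N[i] * P[i][j] for i in range(m)) <= R[j]
               for j in range(n)):
            val = sum(N[i] * O[i] for i in range(m))
            if val > best_val:
                best_N, best_val = list(N), val
    return best_N, best_val

# Two resources, two processes (illustrative numbers only).
R = [10, 6]
P = [[2, 1],   # process 0 uses 2 of resource 0 and 1 of resource 1 per run
     [3, 2]]   # process 1 uses 3 of resource 0 and 2 of resource 1 per run
O = [3, 5]
print(brute_force(R, P, O))  # → ([2, 2], 16)
```

The number of combinations grows multiplicatively in the per-process bounds, which is exactly why this only works for small instances.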
Solution
This is a knapsack problem. In the most basic version, you have a single resource (e.g., space in your knapsack) and you're trying to choose which items to put in it so you can carry the greatest value. People have also studied multi-constraint knapsack problems, which is the variant you're directly asking about.
The bad news is that knapsack problems are, in general, NP-complete. The good news is that they can be solved in pseudopolynomial¹ time using dynamic programming, which is often practical. They can also be approximated fairly well: for any fixed $\varepsilon$, there is a polynomial time algorithm that will give you the answer to within a factor of $1\pm\varepsilon$. However, the running time's dependence on $1/\varepsilon$ is superpolynomial, unless P$\,=\,$NP.
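To illustrate the dynamic-programming idea in the multi-constraint, unbounded setting (assuming integer resource amounts and requirements), one can tabulate the best achievable benefit for every possible vector of remaining resources. This is a sketch, not a tuned implementation; the data at the bottom is illustrative:

```python
from itertools import product

def max_benefit(R, P, O):
    """Pseudopolynomial DP for the unbounded multi-constraint knapsack.

    Assumes all entries of R and P are non-negative integers.
    dp[state] = best benefit achievable when `state` resources remain.
    Time is proportional to prod(R_j + 1) * m: polynomial in the resource
    *amounts*, not in the bit length of the input.
    """
    n = len(R)
    dp = {}
    # Lexicographically increasing order guarantees every componentwise
    # smaller state has already been computed.
    for state in product(*(range(r + 1) for r in R)):
        best = 0
        for req, gain in zip(P, O):
            if all(state[j] >= req[j] for j in range(n)):
                smaller = tuple(state[j] - req[j] for j in range(n))
                best = max(best, dp[smaller] + gain)
        dp[state] = best
    return dp[tuple(R)]

print(max_benefit([10, 6], [[2, 1], [3, 2]], [3, 5]))  # → 16
```

The table size, $\prod_j (R_j + 1)$, is what makes this pseudopolynomial: doubling the number of bits in each $R_j$ squares the amount of work.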
¹ Pseudopolynomial time means that the running time is a polynomial function of the numbers represented in the input, whereas polynomial time means the running time is a polynomial function of the size of the input in bits. For example, the algorithm read n; for i=1 to n do print "hello"; end is pseudopolynomial: if its input is $n$, it performs $n$ iterations but, if the input is written as a $b$-bit binary number, then $n$ could be as big as $2^b$, so the algorithm isn't truly polynomial.
Context
StackExchange Computer Science Q#72659, answer score: 5