HiveBrain v1.2.0

Maximizing the sum of selected elements in a matrix

Submitted by: @import:stackexchange-cs

Problem

I’m trying to find an efficient algorithm for the following optimization problem:

Given a $k \times k$ matrix $A$ with elements $a_{ij}$, select exactly $n$ elements from $A$ ($n < k$) such that the sum of the selected elements is maximized, subject to the constraint that each index is used at most once. In other words: if element $a_{ij}$ has been selected, no other element from rows $i$ and $j$ or columns $i$ and $j$ may be selected.

Notes:

- It is allowed to select elements from the diagonal.

- The order of magnitude of $k$ is 10,000.

I have a dynamic programming algorithm, but the downside is that it can't be parallelized.

The problem resembles the assignment problem, which can be solved using the Hungarian algorithm and parallelized fairly easily using Java 8 streams. However, the assignment problem only constrains each row and each column to contribute at most one element. This is less strict than the problem above, since it allows selecting both $a_{ij}$ and $a_{ji}$.
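To make the difference between the two constraints concrete, here is a tiny brute-force comparison (illustrative only; the function names and the example matrix are mine, and the exhaustive search is obviously nothing like what is needed at $k \approx 10{,}000$):

```python
from itertools import combinations, permutations

def best_assignment(A):
    """Assignment relaxation: one element per row and per column."""
    k = len(A)
    return max(sum(A[i][p[i]] for i in range(k)) for p in permutations(range(k)))

def best_strict(A, n):
    """Strict problem: across all selected elements, each index may be used
    only once, whether it appears as a row or as a column. Diagonal elements
    a_ii are allowed and consume only the single index i."""
    k = len(A)
    cells = [(i, j) for i in range(k) for j in range(k)]
    best = None
    for combo in combinations(cells, n):
        used = set()
        ok = True
        for i, j in combo:
            if i in used or j in used:
                ok = False
                break
            used |= {i, j}
        if ok:
            total = sum(A[i][j] for i, j in combo)
            best = total if best is None else max(best, total)
    return best

A = [[1, 5],
     [5, 1]]
print(best_assignment(A))   # 10: the relaxation takes both a_01 and a_10
print(best_strict(A, 1))    # 5: a_01 alone already uses up indices 0 and 1
```

On this matrix the assignment relaxation happily selects both $a_{01}$ and $a_{10}$, which the strict constraint forbids.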

Question: do you know how to adjust the Hungarian algorithm for the stricter constraint?

Context:

This algorithm is used to optimize large-scale power storage, which can be modelled as a collection of spread options.

Solution

The problem you want to solve is (a slight variation of) maximum weighted matching in general (i.e., not necessarily bipartite) graphs. There are several algorithms with various worst-case bounds:

  • "Data structures for weighted matching and nearest common ancestors with linking" (Gabow 1990) is the best "in theory", with $O(nm + n^2\log n)$ time complexity.

  • "Faster scaling algorithms for general graph-matching problems" (Gabow and Tarjan 1991) has time complexity $O(m\log(nN)\sqrt{n\alpha(n, m)\log n})$, where $N$ is the largest cost value and $\alpha(n, m)$ is the very slow-growing inverse Ackermann function.

  • "Computing minimum-weight perfect matchings" (Cook and Rohe 1999) describes the "Blossom IV" algorithm, which apparently performs well in practice despite a theoretical worst-case $O(n^3)$ time. The paper also gives a good overview of existing algorithms.

  • "Implementation of $O(nm\log n)$ weighted matchings in general graphs: the power of data structures" (Mehlhorn and Schäfer 2002) apparently performs even better in most cases and also comes with a better worst-case guarantee. There is an implementation in LEDA.

  • "Blossom V: A new implementation of a minimum cost perfect matching algorithm" (Kolmogorov 2009) claims to outperform the previous algorithm a majority of the time.



The above algorithms don't allow "self-edges" (which would correspond to choosing elements along the diagonal). To accommodate these, for every vertex $v_i$, create an extra vertex $u_i$ and an edge $(v_i, u_i)$ with weight $a_{ii}$.
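As a sanity check of this reduction, the following sketch maps the matrix onto a weighted graph and runs an exponential reference matcher over it. Both the choice of $\max(a_{ij}, a_{ji})$ as the off-diagonal edge weight and the brute-force matcher itself are my additions, for tiny instances only; a real implementation would call one of the blossom solvers above.

```python
from itertools import combinations

def matrix_to_edges(A):
    """One vertex per index. For i < j, an edge (i, j) weighted
    max(a_ij, a_ji), since only the larger of the two is ever worth taking;
    for the diagonal, the extra vertex ('u', i) described above, joined to i
    by an edge of weight a_ii."""
    k = len(A)
    edges = []
    for i in range(k):
        for j in range(i + 1, k):
            edges.append((i, j, max(A[i][j], A[j][i])))
        edges.append((i, ('u', i), A[i][i]))
    return edges

def max_weight_matching_bruteforce(edges):
    """Exponential reference matcher: try every subset of edges and keep the
    heaviest one that is vertex-disjoint."""
    best = 0
    for r in range(len(edges) + 1):
        for subset in combinations(edges, r):
            used = set()
            ok = True
            for u, v, _ in subset:
                if u in used or v in used:
                    ok = False
                    break
                used |= {u, v}
            if ok:
                best = max(best, sum(w for _, _, w in subset))
    return best

A = [[9, 1, 0],
     [1, 9, 0],
     [0, 0, 4]]
print(max_weight_matching_bruteforce(matrix_to_edges(A)))  # 22 = a_00 + a_11 + a_22
```

Here the optimum uses all three diagonal elements, each matched to its extra vertex $u_i$, which the plain matching formulation could not express.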

Note that some of these papers discuss the maximum weight *perfect* matching problem, which has the additional constraint that every vertex must be incident on an edge in the matching. You can turn any instance of the plain maximum weight matching problem into an instance of the perfect kind by adding, for every existing vertex $v_i$, a new vertex $v'_i$, a zero-cost edge $(v_i, v'_i)$, and another zero-cost edge between every pair $(v'_i, v'_j)$ of new vertices. (Perhaps another construction using fewer zero-weight edges is possible?)
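A minimal sketch of this doubling construction (the `('mirror', v)` labels and function name are mine):

```python
from itertools import combinations

def to_perfect_instance(vertices, edges):
    """Build the construction above: a mirror vertex v'_i per vertex, a
    zero-cost edge (v_i, v'_i), and a zero-cost edge between every pair of
    mirrors. A maximum-weight perfect matching of the new graph then has the
    same weight as a maximum-weight (not necessarily perfect) matching of the
    original: matched originals leave their mirrors to pair up among
    themselves, and unmatched originals take their own mirror."""
    mirror = {v: ('mirror', v) for v in vertices}
    new_vertices = list(vertices) + list(mirror.values())
    new_edges = list(edges)
    new_edges += [(v, mirror[v], 0) for v in vertices]
    new_edges += [(mirror[a], mirror[b], 0) for a, b in combinations(vertices, 2)]
    return new_vertices, new_edges

V = [0, 1, 2]
E = [(0, 1, 5), (1, 2, 3)]
NV, NE = to_perfect_instance(V, E)
print(len(NV), len(NE))  # 6 vertices; 2 original + 3 mirror + 3 mirror-pair edges
```

Note the graph always has an even number of vertices, so a perfect matching exists whenever the mirror edges are present.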

I found most of the above information in https://people.mpi-inf.mpg.de/~mehlhorn/ftp/WeightedMatchings.pdf, which has a lot of useful information on implementation and theory.

Context

StackExchange Computer Science Q#93017, answer score: 5
