Does this algorithm for permuting rows and columns of a matrix converge?
Problem
Given a binary matrix, define the magnitude of a row as the binary number obtained by reading its entries left to right, and similarly for a column, reading its entries top to bottom. For instance, the magnitude of the 1st row of the matrix below is 1001101, and the magnitude of its 2nd column is 0011, i.e. 11.
1 0 0 1 1 0 1
0 0 0 1 1 0 1
0 1 1 0 0 1 0
0 1 1 0 0 1 0
I would like to sort a matrix by doing the following:
1. Reorder the rows in descending order of row magnitude.
2. Reorder the columns in descending order of column magnitude.
3. Repeat steps 1 and 2 until both the rows and the columns are in order.
For the matrix above this would go as follows:
sort by rows (step 1)
1 0 0 1 1 0 1
0 1 1 0 0 1 0
0 1 1 0 0 1 0
0 0 0 1 1 0 1
sort by columns (step 2)
1 1 1 1 0 0 0
0 0 0 0 1 1 1
0 0 0 0 1 1 1
1 1 1 0 0 0 0
sort by rows (step 1)
1 1 1 1 0 0 0
1 1 1 0 0 0 0
0 0 0 0 1 1 1
0 0 0 0 1 1 1
Both rows and columns are now in order, so the procedure terminates.
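The alternating procedure can be sketched in Python as follows (function names are my own). Since all rows have the same length, comparing rows as tuples gives the same order as comparing their magnitudes, and likewise for columns:

```python
def sort_rows(m):
    """Reorder rows in descending order of row magnitude."""
    return sorted(m, key=tuple, reverse=True)

def sort_cols(m):
    """Reorder columns in descending order of column magnitude."""
    cols = sorted(zip(*m), reverse=True)        # transpose and sort
    return [list(row) for row in zip(*cols)]    # transpose back

def converge(m):
    """Alternate row and column sorts until neither changes the matrix.

    Returns the fixed point and the number of sorting passes that
    actually reordered something.
    """
    passes = 0
    while True:
        changed = False
        for step in (sort_rows, sort_cols):
            m2 = step(m)
            if m2 != m:
                m, passes, changed = m2, passes + 1, True
        if not changed:
            return m, passes

M = [[1, 0, 0, 1, 1, 0, 1],
     [0, 0, 0, 1, 1, 0, 1],
     [0, 1, 1, 0, 0, 1, 0],
     [0, 1, 1, 0, 0, 1, 0]]
fixed, passes = converge(M)   # passes == 3 for this example
```

Running this on the matrix above reproduces the three reorderings of the worked example and returns the final matrix shown in the last step.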
For most matrices, performing steps 1 and 2 once won't be sufficient, since reordering the columns can ruin the ordering of the rows. Strangely, though, three reorderings seem to be sufficient for every matrix I've tried.
Despite not being able to find a counterexample, I struggle to see why this would be the case, and more generally, what guarantees the algorithm won't get stuck in a cycle. Is there an existing name or a more formal way to describe this algorithm, and are there any proofs of its convergence?
Solution
For an $n \times m$ matrix $M$, define the potential function
$$
\Phi(M) = \sum_{i=1}^n \sum_{j=1}^m 2^{n-1-i} 2^{m-1-j} M(i,j).
$$
If we write it as
$$
\Phi(M) = \sum_{i=1}^n 2^{n-1-i} \sum_{j=1}^m 2^{m-1-j} M(i,j)
$$
then it immediately follows that reordering rows either fixes $M$ or strictly increases $\Phi$: the inner sum is proportional to the magnitude of row $i$, the row weights $2^{n-1-i}$ are strictly decreasing, and by the rearrangement inequality, pairing larger weights with larger magnitudes strictly increases the sum unless the rows are already sorted (two rows of equal magnitude are identical, so reordering them does not change $M$). Similarly, reordering columns either fixes $M$ or strictly increases $\Phi$. Since there are only finitely many $n \times m$ binary matrices, $\Phi$ can increase only finitely often, and so your algorithm eventually converges.
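As a quick numerical sanity check (my own sketch, not part of the original argument), here is the potential evaluated on the $4 \times 7$ matrix from the question, scaled by the constant factor $4$ so that every weight $2^{n-i} 2^{m-j}$ is an integer; a constant factor does not affect the monotonicity argument:

```python
def phi(m):
    """Potential Phi(M), scaled by 4 so that all weights are integers."""
    n, cols = len(m), len(m[0])
    return sum(2 ** (n - i) * 2 ** (cols - j) * m[i - 1][j - 1]
               for i in range(1, n + 1) for j in range(1, cols + 1))

M = [[1, 0, 0, 1, 1, 0, 1],
     [0, 0, 0, 1, 1, 0, 1],
     [0, 1, 1, 0, 0, 1, 0],
     [0, 1, 1, 0, 0, 1, 0]]
rows_sorted = sorted(M, key=tuple, reverse=True)
# phi(M) == 818, phi(rows_sorted) == 929: sorting the rows
# strictly increased the potential, as the argument predicts.
```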
It is definitely not the case that the process terminates after three steps. For example, here is a matrix which requires five steps:
$$
\begin{pmatrix}
0&0&1&0 \\
0&1&0&0 \\
1&0&0&1 \\
0&0&1&1
\end{pmatrix}
$$
This is a matrix in which the $(i,j)$'th entry depends only on $i+j$. Specifically, $M(i,j) = 1$ iff $i+j \in \{4,7,8\}$.
More generally, the $n \times n$ matrix $M$ in which $M(i,j)=1$ iff $i+j \in \{4,2n-1,2n\}$ requires $2n-3$ rounds, which is optimal for small values of $n$ (over all $n \times n$ matrices). Moreover, this is the unique choice over matrices of this form.
(To complete the picture, for $n = 3$ the unique optimal choice is $i + j \in \{4,6\}$ rather than $i+j \in \{4,5,6\}$, and for $n = 2$ the only choices are $i+j \in \{4\}$ and $i+j \in \{3,4\}$, which both correspond to two rounds.)
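The family above is easy to check numerically. The following self-contained sketch (my own code, not part of the original answer) builds the matrix with $M(i,j) = 1$ iff $i + j \in \{4, 2n-1, 2n\}$ and counts the sorting passes that change the matrix:

```python
def family(n):
    """n x n matrix with M(i,j) = 1 iff i + j in {4, 2n-1, 2n} (1-indexed)."""
    return [[1 if i + j in {4, 2 * n - 1, 2 * n} else 0
             for j in range(1, n + 1)] for i in range(1, n + 1)]

def rounds(m):
    """Count row/column sorting passes until the matrix stops changing."""
    sort_rows = lambda a: sorted(a, key=tuple, reverse=True)
    sort_cols = lambda a: [list(r) for r in zip(*sorted(zip(*a), reverse=True))]
    count = 0
    while True:
        changed = False
        for step in (sort_rows, sort_cols):
            m2 = step(m)
            if m2 != m:
                m, count, changed = m2, count + 1, True
        if not changed:
            return count

# rounds(family(4)) == 5 == 2 * 4 - 3, matching the five-step example above.
```

For $n = 3$, the matrix with $i + j \in \{4, 6\}$ mentioned above can be built the same way and indeed takes $2 \cdot 3 - 3 = 3$ rounds.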
Context
StackExchange Computer Science Q#139020, answer score: 5