HiveBrain v1.2.0

Can one increment an $n$ bit integer using fewer than $2 - 2^{1-n}$ bit inspections on average?

Submitted by: @import:stackexchange-cs

Problem

Given an $n$-bit integer, I am interested in performing an increment operation using as few bit reads as possible. The standard binary code (the usual binary representation of numbers) requires $n$ bit reads and writes in the worst case, but only $2 - 2^{1-n}$ bit reads on average (i.e., $2$ bit reads on average asymptotically). On the other hand, there are codes like the Binary Reflected Gray Code (BRGC) that require only $1$ bit write in the worst case (since consecutive integers in the sequence have Hamming distance one). However, the BRGC requires $n$ bit reads in both the worst and the average case. There are also other codes, such as the Recursive Partition Gray Code, which bring the average number of reads down to $O(\log n)$.
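To make the $2 - 2^{1-n}$ figure concrete, here is a small sketch (function names are mine) that counts the bit reads made by the standard binary increment, which scans from the least significant bit, flipping $1$s until it reads and flips a $0$:

```python
def reads_for_increment(x: int, n: int) -> int:
    """Number of bit reads the standard binary increment makes on x."""
    reads = 0
    for i in range(n):
        reads += 1
        if (x >> i) & 1 == 0:  # read a 0: flip it and stop
            break
        # read a 1: flip it to 0 and carry on to the next bit
    return reads

def average_reads(n: int) -> float:
    """Average number of bit reads over all 2^n inputs (uniform)."""
    return sum(reads_for_increment(x, n) for x in range(2 ** n)) / 2 ** n
```

Averaging over all $2^n$ inputs reproduces exactly $2 - 2^{1-n}$: for example, `average_reads(3)` gives $1.75 = 2 - 2^{-2}$.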

I would like to know whether there is a code that outperforms the standard binary code with regard to the average number of reads. I place no restrictions on the number of bit writes or on the number of reads in the worst case. I suspect there is a lower bound showing that you cannot do better than $2 - 2^{1-n}$ on average, but I haven't been able to find a reference.

Edit: I know there is a lower bound of $\log n + 1$ reads in the worst case due to Fredman (1978), but I am not aware of a lower bound for the average case.

Solution

We can prove a tight lower bound of $2 - 2^{1-n}$ as follows. Informally, we will argue inductively that whenever a bit is read during the increment procedure, one of the two possible outcomes (reading a $0$ or reading a $1$) must cause the algorithm to stop reading additional bits. This shows that if we require the average number of reads to be strictly smaller than $2$, then the code must be isomorphic to the standard binary code.

Suppose that the first bit read by the increment algorithm is bit $i_1 \in [n]$. There are three cases to consider. In the first case (A), the algorithm stops reading additional bits regardless of the value read. This means that after reading bit $i_1$ the algorithm must immediately decide which bits to flip, and clearly this cannot yield a valid increment algorithm for $n > 1$. In the second case (B), the algorithm adaptively reads another bit regardless of the value read; note that it may choose a different second bit depending on the value of bit $i_1$. In this case we always read at least $2$ bits, so the average number of reads is at least $2 > 2-2^{1-n}$. Finally, in the third case (C), the algorithm reads another bit for only one of the two values of bit $i_1$ (say $1$, for convenience). That is, when bit $i_1$ reads $0$ the algorithm determines which bits need to be flipped, and when it reads $1$ the algorithm probes a second bit. In this case the expected number of bit reads is at least $1/2 \cdot 1 + 1/2 \cdot 2 = 3/2$. This is the only case in which we can hope to beat the standard binary code.

Now we apply this argument inductively. Suppose the algorithm is adaptively reading bit $i_k$, with $k \leq n-1$. In case (A), at most $n-1$ of the bits are ever read by the algorithm. This means that there is a bit that is never read and thus never written to, so the increment algorithm cannot be correct. (Here we assume that the only bits that can be flipped are the ones read during that particular increment operation.) In case (B), the expected number of reads is at least $$\sum_{j=1}^{k-1} j/2^j + 2^{1-k}\cdot(k+1) = 2,$$ which is again too many. In case (C), however, the expected number of reads is at least $$\sum_{j=1}^{k-1} j/2^j + 2^{-k}\cdot k + 2^{-k} \cdot (k+1) = 2 - 2^{-k},$$ which is again the only case that might lead to better performance than the standard binary code.
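The two closed forms can be checked numerically with exact rational arithmetic (a quick sketch; function names are mine). The case-(B) sum always equals $2$, and the case-(C) sum works out to $2 - 2^{-k}$:

```python
from fractions import Fraction

def case_b_expectation(k: int) -> Fraction:
    # Stop after j reads with probability 2^-j for j < k; with the
    # remaining probability 2^(1-k), make at least k+1 reads.
    return (sum(Fraction(j, 2 ** j) for j in range(1, k))
            + Fraction(k + 1, 2 ** (k - 1)))

def case_c_expectation(k: int) -> Fraction:
    # At step k: stop after k reads with probability 2^-k, or
    # continue to at least k+1 reads with probability 2^-k.
    return (sum(Fraction(j, 2 ** j) for j in range(1, k))
            + Fraction(k, 2 ** k) + Fraction(k + 1, 2 ** k))

for k in range(1, 20):
    assert case_b_expectation(k) == 2
    assert case_c_expectation(k) == 2 - Fraction(1, 2 ** k)
```

Using `Fraction` avoids any floating-point rounding, so the equalities are verified exactly for each $k$.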

When performing the $n$-th bit read, there are no further bits for the algorithm to read (the other $n-1$ bits have already been read), so only case (A) remains valid. Computing the expected number of reads then gives exactly $$\sum_{j=1}^{n-1} j/2^j + 2^{1-n}\cdot n = 2 - 2^{1-n},$$ as required.

I find it helps to view this as a binary decision tree, where each internal node is labelled by the bit read and each leaf is labelled by the set of bits to flip. You begin by reading the bit at the root of the tree (in our case $i_1$), then go left if you read a $0$ and right if you read a $1$. With this picture in mind, it becomes clear that there are only three cases to consider at each step of the proof.
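This decision-tree view can be sketched in code for the standard binary code (the class and function names are illustrative, not from the original): each internal node reads the next least significant bit, the $0$-child is a leaf that flips the bits read so far, and the $1$-child keeps probing. The expected leaf depth under a uniform input is then the average number of reads:

```python
from dataclasses import dataclass
from typing import Union

@dataclass
class Leaf:
    flips: tuple          # indices of the bits to flip at this leaf

@dataclass
class Node:
    bit: int              # index of the bit read at this node
    left: Union["Node", Leaf]   # subtree taken on reading a 0
    right: Union["Node", Leaf]  # subtree taken on reading a 1

def binary_increment_tree(n: int, i: int = 0) -> Node:
    """Decision tree of the standard binary increment on n bits."""
    if i == n - 1:
        # Last bit: flip everything either way (wrap-around on all-ones).
        leaf = Leaf(tuple(range(i + 1)))
        return Node(i, leaf, leaf)
    # Read 0 at bit i: flip bits 0..i and stop; read 1: keep probing.
    return Node(i, Leaf(tuple(range(i + 1))), binary_increment_tree(n, i + 1))

def expected_reads(tree, prob: float = 1.0, depth: int = 0) -> float:
    """Average number of bits read (leaf depth), uniform input."""
    if isinstance(tree, Leaf):
        return prob * depth
    return (expected_reads(tree.left, prob / 2, depth + 1)
            + expected_reads(tree.right, prob / 2, depth + 1))
```

For this tree, `expected_reads(binary_increment_tree(n))` evaluates to $2 - 2^{1-n}$, matching the bound proved above.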

Context

StackExchange Computer Science Q#50494, answer score: 3
