HiveBrain v1.2.0

Complexity of multiplication

Submitted by: @import:stackexchange-cs
Tags: complexity, multiplication, stackoverflow

Problem

I've been reading around the area of complexity and arithmetic operations using logic gates; one thing that is confusing me is that
\begin{equation}
\Theta (n^{2})
\end{equation}
is quoted as the complexity of multiplication by iterative addition. But adding a number requires \begin{equation}
\log_2(n)
\end{equation}
operations, one for each bit, or roughly eight times that counting the NAND gates involved. So it strikes me as obvious that adding that number $n$ times will have a complexity of
\begin{equation}
n \log_2(n)
\end{equation}
which is definitely less than
\begin{equation}
\Theta (n^{2})
\end{equation}
So where is this additional factor of
\begin{equation}
\frac{n}{\log_2(n)}
\end{equation}
coming from?

Solution

Addition of an $n$-bit number takes time $O(n)$. Don't confuse a number with its encoding size, which is logarithmically smaller: the value $n$ is written with only about $\log_2 n$ bits.
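To see why the cost of an addition tracks the bit length rather than its logarithm, here is a minimal sketch (the function name and step counter are illustrative, not from the original answer) of a ripple-carry adder that spends one full-adder step per bit position:

```python
def ripple_carry_add(a: int, b: int, n_bits: int):
    """Add two integers bit by bit over n_bits positions.

    Returns the sum (truncated to n_bits) and the number of
    full-adder steps performed -- one per bit position.
    """
    carry, result, steps = 0, 0, 0
    for i in range(n_bits):
        x = (a >> i) & 1
        y = (b >> i) & 1
        s = x ^ y ^ carry                      # sum bit of the full adder
        carry = (x & y) | (carry & (x ^ y))    # carry-out of the full adder
        result |= s << i
        steps += 1                             # one full-adder evaluation
    return result, steps

# Adding two numbers that need 17 bits costs 17 steps, not log2(17) steps.
total, steps = ripple_carry_add(40000, 25000, 17)
```

The step count equals the operand width `n_bits`, which is the $O(n)$ the answer refers to; the $\log_2$ only relates a number's value to its width.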

When multiplying an $n$-bit integer $a$ by an $n$-bit integer $b$ using the iterative addition algorithm, you are adding up to $n$ shifted copies of $a$. Each of those additions costs $O(n)$ rather than $O(\log n)$, since the operands are $n$ bits wide; the numbers $a, b$ themselves can be as large as $2^n - 1$. That gives $n$ additions at $O(n)$ each, which is the quoted $\Theta(n^2)$.
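The shift-and-add multiplication described above can be sketched as follows; the `bit_ops` tally rests on the stated assumption that each row addition costs about $2n$ single-bit operations (the operands and running product are $O(n)$ bits wide):

```python
def shift_and_add_mul(a: int, b: int, n: int):
    """Multiply two n-bit integers by adding shifted copies of a.

    Counts approximate single-bit work: each of the n rows is charged
    2n bit operations, so the total grows as n^2, not n log n.
    """
    product, bit_ops = 0, 0
    for i in range(n):                 # one row per bit of b
        if (b >> i) & 1:
            product += a << i          # add a shifted copy of a
        bit_ops += 2 * n               # assumed cost of one O(n)-bit row
    return product, bit_ops

# 1000 fits in 10 bits; the work estimate is 2 * 10 * 10 = 200 bit ops,
# quadratic in the bit length n.
p, ops = shift_and_add_mul(1000, 1000, 10)
```

Doubling $n$ quadruples `bit_ops`, matching $\Theta(n^2)$, whereas an $n \log_2 n$ count would grow far more slowly.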

Context

StackExchange Computer Science Q#52640, answer score: 8
