
What is the difference between $\log^2(n)$, $\log(n)^2$, $(\log n)^2$, $\log (n^2)$ and $\log \log n$?

Submitted by: @import:stackexchange-cs

Problem

In research articles (e.g. this one's abstract), I often see the complexity of an algorithm expressed using a somewhat ambiguous notation:

- $\log^2(n) = ?$

- $\log(n)^2 = ?$

- $(\log n)^2 = (\log(n)) \times (\log(n))$

- $\log(n^2) = \log(n \times n)$

- $\log \log n = \log(\log(n))$



The last three cases are easy enough to understand, but what about the first two? The operator precedence is unclear: do they mean $\log(n \times n)$, do they mean $(\log(n)) \times (\log(n))$, or do they mean $\log(\log(n))$?

Citing a reliable source would be a plus, so that I can confidently add this information to Wikipedia.

Note that I'm not asking about the meaning of big-O or about $O(\log n)$ complexity. I'm interested in what the $^2$ means (square the $n$, square the result of $\log(n)$, or apply $\log$ twice, i.e. $\log^2 = \log \circ \log$).

Solution

Regarding the operator precedence, as noted in the other answers:

log²(n) : possibly ambiguous;
              most commonly log(n) * log(n),
              but some writers intend log(log(n))

log(n)² : ambiguous; either (log(n)) * (log(n)),
              or log(n²) = 2*log(n)

Now, you asked about their meaning in the context of asymptotic behaviour and, specifically, Big-O notation. The note below concerns research articles that state the time complexity of an algorithm as log(n²), which is, in the context of Big-O notation, somewhat of a misuse of the notation.

First note that

log(n²) = 2*log(n)


If some arbitrary function f(n) is in O(log(n²)) = O(2*log(n)), then we can show that f(n) is in O(log(n)).

From the definition of Big-O, we can state the following:

f(n) is in O(g(n))

=> |f(n)| ≤ k*|g(n)|, for some constant k>0                 (+)
                      for n sufficiently large (say, n>N)


See e.g. this reference covering Big-O notation.

Now, if f(n) is in O(2*log(n)), then there exist positive constants k and N such that the following holds:

    |f(n)| ≤ k*|2*log(n)|, for n>N

<=> |f(n)| ≤ 2k*|log(n)|,  for n>N                         (++)


Hence, we can just choose a constant k2 = 2k, and it follows from (++) that f(n) is in O(log(n)).
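The constant-rescaling step above can be sanity-checked numerically. A minimal Python sketch, taking f(n) = log(n²) and k = 1 for illustration (the choice of f and k is an assumption made only for this demo):

```python
import math

# Illustrative check of the rescaling argument:
# f(n) = log(n^2) satisfies |f(n)| <= k * |2*log(n)| with k = 1,
# and therefore also |f(n)| <= k2 * |log(n)| with k2 = 2*k.
k = 1
k2 = 2 * k
for n in range(2, 10_000):
    f = math.log(n ** 2)
    assert abs(f) <= k * abs(2 * math.log(n)) + 1e-12   # f in O(2*log(n))
    assert abs(f) <= k2 * abs(math.log(n)) + 1e-12      # hence f in O(log(n))
print("bound holds for all tested n")
```

This is of course no proof, only a spot check of the inequality chain over a finite range of n.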

Key point: The purpose of this explanation is that you should never see an article describe the asymptotic behaviour of an algorithm as O(log(n²)), as this is just a redundant way of saying that the algorithm is in O(log(n)).

For explanations of what the different O(log(n)) variations mean for an algorithm's behaviour, see e.g. the links provided in the other answers, as well as the numerous threads on SO covering these subjects (e.g. the time complexity of binary search, and so on).


Context

StackExchange Computer Science Q#51833, answer score: 8
