I had a fun talk about this with an interviewer once when they asked me about the runtime complexity of some algorithm.
Let \(f\) and \(g\) be eventually positive functions from \(\mathbb{R}_{\ge 0}\) to \(\mathbb{R}_{>0}\). Pick an unlimited hypernatural \(H\in{}^*\mathbb{N}\setminus\mathbb{N}\) (we write \(H\gg 1\) for short), and consider the ratio
\[R(H) \;=\; \frac{{}^*f(H)}{{}^*g(H)}.\]
Then the big O relations become simple checks on \(R(H)\), where we require that each condition hold for every unlimited \(H\):

- \(f\in O(g)\) if and only if there exists a standard constant \(C>0\) such that \(R(H)\le C\).
- \(f\in \Omega(g)\) if and only if there exists a standard constant \(c>0\) such that \(R(H)\ge c\).
- \(f\in \Theta(g)\) if and only if there exist standard constants \(0<c\le C<\infty\) such that \(c\le R(H)\le C\).
- \(f\in o(g)\) if and only if \(R(H)\approx 0\) (that is, \(R(H)\) is infinitesimal).
- \(f\in \omega(g)\) if and only if \(R(H)\) is unlimited.
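Real code can't evaluate anything at a genuinely unlimited \(H\), but a finite caricature of the ratio test is still handy for building intuition. In the sketch below, `classify`, the sample points, and the thresholds are all my own arbitrary choices, not part of the theory; it just samples \(R\) at a few large inputs and guesses the asymptotic class:

```python
def ratio(f, g, x):
    """R(x) = f(x) / g(x), a finite stand-in for *f(H) / *g(H)."""
    return f(x) / g(x)

def classify(f, g, samples=(10**6, 10**9, 10**12)):
    """Heuristic caricature of the unlimited-H test: evaluate R at a few
    large points and compare against crude thresholds. Unlike the real
    test, this can be fooled (see the oscillating example later)."""
    rs = [ratio(f, g, x) for x in samples]
    if all(r < 1e-3 for r in rs):
        return "looks like f in o(g)"
    if all(r > 1e3 for r in rs):
        return "looks like f in omega(g)"
    if all(1e-3 <= r <= 1e3 for r in rs):
        return "looks like f in Theta(g)"
    return "inconclusive at these samples"
```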
Examples
For \(f(x)=x^2\) and \(g(x)=x^3\), we compute \(R(H)=1/H\), which is infinitesimal. Therefore \(x^2\in o(x^3)\).
For \(f(x)=x^3\) and \(g(x)=x^2\), we compute \(R(H)=H\), which is unlimited. Therefore \(x^3\in \omega(x^2)\), and hence \(x^3\in\Omega(x^2)\).
For \(f(x)=x+c\), where \(c\) is any standard real number, and \(g(x)=x\), we compute \(R(H)=1+\tfrac{c}{H}\). Since \(\tfrac{c}{H}\) is infinitesimal, we have \(R(H)\approx 1\). Therefore \(x+c\in\Theta(x)\).
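Running those three pairs through the `classify` sketch above (same caveats: finite samples and arbitrary thresholds, so this is a sanity check rather than a proof):

```python
# Uses classify() from the sketch above.
print(classify(lambda x: x**2, lambda x: x**3))  # looks like f in o(g)
print(classify(lambda x: x**3, lambda x: x**2))  # looks like f in omega(g)

c = 42  # any standard constant works here
print(classify(lambda x: x + c, lambda x: x))    # looks like f in Theta(g)
```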
The “for every unlimited \(H\)” condition encodes the usual “for all sufficiently large \(x\)” via transfer. A single infinite index can be fooled by oscillations; checking every unlimited \(H\) gives you the uniformity you need.
For example, let \(g(x)=x\) and define \(f\) by parity of the integer part:
- if \(\lfloor x\rfloor\) is even, set \(f(x)=x\);
- if \(\lfloor x\rfloor\) is odd, set \(f(x)=\dfrac{x}{\log(x+2)}\).
Then \(f\) is eventually positive, and the ratio \(R(x)=\dfrac{f(x)}{g(x)}\) oscillates between \(1\) (on even blocks) and \(\dfrac{1}{\log(x+2)}\) (on odd blocks). A single check at an even unlimited \(H\) gives \(R(H)=1\) and could tempt you to say \(f\in\Theta(g)\). But along odd unlimited \(H\) we have \(R(H)=\dfrac{1}{\log(H+2)}\approx 0\), so \(f\notin\Omega(g)\) and hence \(f\notin\Theta(g)\), even though \(f\in O(g)\). Requiring the condition for every unlimited \(H\) prevents this mistake.
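Here is the same trap in runnable form, as a rough finite sketch: the branchy \(f\) below follows the definition above, and two probes one integer apart give wildly different ratios.

```python
import math

def f(x):
    # Even integer part: f(x) = x. Odd integer part: f(x) = x / log(x + 2).
    if math.floor(x) % 2 == 0:
        return x
    return x / math.log(x + 2)

def g(x):
    return x

even_H = 10**12        # a large even probe: the ratio sits at 1
print(f(even_H) / g(even_H))   # 1.0

odd_H = 10**12 + 1     # one step away: the ratio collapses toward 0
print(f(odd_H) / g(odd_H))     # about 0.036, i.e. 1 / log(10**12 + 3)
```

Note that the `classify` sketch from earlier would happily report \(\Theta(g)\) here, since its sample points \(10^6, 10^9, 10^{12}\) are all even; that blind spot is precisely what quantifying over every unlimited \(H\) eliminates.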