Discrete Mathematics for Computer Science: An open educational resource.

Section 8.6 Growth of Functions

How long will it take to process the company payroll once we complete our planned merger? Should I buy a new payroll program from vendor X or vendor Y? If a particular program is slow, is it badly implemented or is it solving a hard problem?
Questions like these ask us to consider the difficulty of a problem, or the relative efficiency of two or more approaches to solving a problem.
This section and the next introduce the motivation, basic notation, and fundamental techniques of algorithm analysis. In this section we focus on Big O notation, a mathematical notation that describes the limiting or asymptotic behavior of a function as its input values become large. In the next section we will use Big O notation to analyze the running times of different algorithms using asymptotic algorithm analysis.

Subsection 8.6.1 Big O Notation

Big O is the most frequently used asymptotic notation. It is often used to give an upper bound on the growth of a function, such as the running time of an algorithm. Big O notation characterizes functions according to their growth rates: different functions with the same growth rate may be represented using the same Big O notation. The letter O is used because the growth rate of a function is also referred to as the order of the function.

Definition 8.6.1. Big O.

Given functions f(x), g(x): R -> R with g non-negative, we say f(x) is Big O of g(x), written
f(x) = O(g(x)),
if and only if there exist constants c > 0 and k such that for all x ≥ k:
|f(x)| ≤ c·g(x).
This definition is rather complicated, but the idea is simple: f(x) = O(g(x)) means f(x) is less than or equal to g(x), except that we're willing to ignore a constant factor, namely c, and to allow exceptions for small x, namely for x < k.

Example 8.6.2. Prove 10x^2 = O(x^2).

If we choose c = 10 and k = 0, then 10x^2 = O(x^2) holds because for all x ≥ 0,
|10x^2| ≤ 10·x^2.
In the following SageMath cell, we plot f(x) = 10x^2, g(x) = x^2, and c·g(x) with c = 10. The resulting plot shows that f(x) (red) never exceeds c·g(x) (black dots). That is what we want to show for Big O.
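The same inequality can also be spot-checked numerically. A minimal Python sketch (the sample range is an arbitrary choice):

```python
# Spot-check Example 8.6.2: |f(x)| <= c*g(x) for f(x) = 10x^2, g(x) = x^2,
# using the witnesses c = 10 and k = 0 from the text.
def f(x):
    return 10 * x**2

def g(x):
    return x**2

c, k = 10, 0
# Check the defining inequality of Big O at many sample points x >= k.
assert all(abs(f(x)) <= c * g(x) for x in range(k, 1000))
print("10x^2 = O(x^2) verified at sampled points")
```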

Example 8.6.3. Prove x^2 + 100x + 10 = O(x^2).

If we take the polynomial x^2 + 100x + 10 and replace each lower-order term with its coefficient multiplied by x^2, we have:
x^2 + 100x^2 + 10x^2 = 111x^2.
The original polynomial must be less than or equal to this new expression whenever x ≥ 1:
x^2 + 100x + 10 ≤ x^2 + 100x^2 + 10x^2, for x ≥ 1.
Therefore, if we choose c = 111 and k = 1 we have x^2 + 100x + 10 = O(x^2).
In the following SageMath cell, we plot f(x) = x^2 + 100x + 10, g(x) = x^2, and c·g(x) with c = 111. The resulting plot shows that f(x) (red) never exceeds c·g(x) (black dots) after x = 1. That is what we want to show for Big O.
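As a numeric companion to the plot, a short Python check of the witnesses c = 111 and k = 1 (sample range chosen arbitrarily):

```python
# Spot-check Example 8.6.3: x^2 + 100x + 10 <= 111*x^2 for all x >= 1.
def f(x):
    return x**2 + 100 * x + 10

c, k = 111, 1
assert all(f(x) <= c * x**2 for x in range(k, 1000))
# The threshold k = 1 matters: at x = 0 we would need 10 <= 0, which fails.
assert not f(0) <= c * 0**2
print("x^2 + 100x + 10 = O(x^2) verified with c = 111, k = 1")
```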

Theorem 8.6.4.

Any polynomial of degree k, a_k·x^k + a_{k-1}·x^{k-1} + ... + a_1·x + a_0, is O(x^k).

Proof.

Given a polynomial with non-negative coefficients:
a_k·x^k + a_{k-1}·x^{k-1} + ... + a_1·x + a_0,
if we replace the power of x in every lower-order term with the highest power x^k, then for x ≥ 1 we have:
a_k·x^k + a_{k-1}·x^{k-1} + ... + a_1·x + a_0 ≤ a_k·x^k + a_{k-1}·x^k + ... + a_1·x^k + a_0·x^k = (a_k + a_{k-1} + ... + a_1 + a_0)·x^k.
Therefore,
a_k·x^k + a_{k-1}·x^{k-1} + ... + a_1·x + a_0 = O(x^k),
taking the constant of Definition 8.6.1 to be c = a_k + a_{k-1} + ... + a_1 + a_0, with the bound holding for all x ≥ 1.
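The proof's recipe, taking c to be the sum of the coefficients, can be sketched in Python for any polynomial with non-negative coefficients (the example coefficient lists are arbitrary; the first is Example 8.6.3):

```python
# Theorem 8.6.4's witness: for p(x) = a_k x^k + ... + a_0 with non-negative
# coefficients, c = a_k + ... + a_0 gives p(x) <= c * x^k for all x >= 1.
def poly(coeffs, x):
    # coeffs[i] is the coefficient of x^i
    return sum(a * x**i for i, a in enumerate(coeffs))

for coeffs in [[10, 100, 1], [13, 7, 0, 8], [5, 0, 0, 2, 6]]:
    c = sum(coeffs)
    deg = len(coeffs) - 1
    assert all(poly(coeffs, x) <= c * x**deg for x in range(1, 500))
```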

Example 8.6.5. Prove 6x^4 - 2x^3 + 5 = O(x^4).

If we have negative coefficients in a polynomial, Theorem 8.6.4 still applies, because Big O as defined in Definition 8.6.1 bounds the absolute value of f(x), and g(x) must be non-negative.
Here we take the absolute value of our polynomial, |6x^4 - 2x^3 + 5|, and multiplying the absolute value of each coefficient by x^4 we have:
|6x^4 - 2x^3 + 5| ≤ 6x^4 + |-2|·x^4 + 5x^4 = 13x^4, for x ≥ 1.
Therefore, we have
6x^4 - 2x^3 + 5 = O(x^4)
with c = 13 and k = 1.
We can actually get a closer bound if we realize that the -2x^3 term only makes the value of the polynomial smaller:
|6x^4 - 2x^3 + 5| ≤ |6x^4 + 5| ≤ 11x^4, for x ≥ 1.
Therefore, we can also say that 6x^4 - 2x^3 + 5 = O(x^4) for c = 11 and k = 1.
We only need to show that one c and one k exist to prove Big O. Once they are found, the bound also holds for any larger values of c and k.
In the following SageMath cell, we plot f(x) = 6x^4 - 2x^3 + 5 (red), g(x) = x^4 (blue), and c·g(x) with c = 13 (black dots). The resulting plot shows that f(x) never exceeds c·g(x) after x = 1. That is what we want to show for Big O.
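Both witnesses from this example can be checked numerically in Python (sample range arbitrary):

```python
# Spot-check Example 8.6.5: |6x^4 - 2x^3 + 5| <= c*x^4 for x >= 1,
# first with the loose witness c = 13, then with the tighter c = 11.
def f(x):
    return 6 * x**4 - 2 * x**3 + 5

for c in (13, 11):
    assert all(abs(f(x)) <= c * x**4 for x in range(1, 1000))
print("6x^4 - 2x^3 + 5 = O(x^4) verified for both c = 13 and c = 11")
```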

Example 8.6.6. Big O of n!.

n! = n·(n-1)···2·1 for n ≥ 1. There are n factors in total, and each factor is at most n, so it must be true that:
n·(n-1)···2·1 ≤ n·n···n·n = n^n.
Therefore, n! = O(n^n) with c = 1 and k = 1.
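This bound is easy to confirm with Python's math.factorial (sample range arbitrary):

```python
import math

# Spot-check Example 8.6.6: n! <= n^n for all n >= 1 (c = 1, k = 1).
assert all(math.factorial(n) <= n**n for n in range(1, 200))
print("n! = O(n^n) verified at sampled points")
```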

Subsection 8.6.2 Big O For Common Functional Forms

In Section 9.3 we will be using several functions to analyze the running time of algorithms. The most commonly encountered functions are shown in Figure 8.6.7 below.
Figure 8.6.7. Growth of functions commonly used for algorithm analysis. Image by Cmglee, CC BY-SA 4.0 (creativecommons.org/licenses/by-sa/4.0).
In addition to polynomials, we have the following Big O estimates:
  1. Any power of n is Big O of any larger power of n, for powers greater than 1:
    n^c = O(n^d), 1 < c < d.
  2. A logarithm of n to any power is Big O of any power of n, for log bases greater than 1:
    (log_b n)^c = O(n^d), 0 < c, 0 < d.
  3. Any power of n is Big O of an exponential in n:
    n^c = O(b^n), 1 < b, 0 < c.
  4. Any exponential is Big O of an exponential with a larger base:
    b^n = O(c^n), 1 < b < c.
Table 8.6.8. Orders of Common Functions. Here is a list of orders (or classes) of functions that are commonly encountered when analyzing algorithms. In each case, c is a positive constant and n increases without bound. Functions are listed from slowest growing at the top to fastest growing at the bottom.

Notation | Name | Example
O(1) | Constant time | No looping; determining if a binary number is even or odd; calculating (-1)^n
O(log n) | Logarithmic time | Loops where n is divided by a constant each iteration; finding an item in a sorted array with a BinarySearch
O(n) | Linear time | One loop over all n; finding an item in an unsorted list or in an unsorted array
O(n log n) = O(log n!) | Linearithmic time | Algorithms that divide n by a constant, but then need to reassemble all n items in some way; fastest possible comparison sort; HeapSort and MergeSort
O(n^2) | Quadratic time | Two nested loops both over n; multiplying two n-digit numbers by a simple algorithm; simple sorting algorithms, such as BubbleSort, SelectionSort, and InsertionSort
O(n^c) | Polynomial time | c nested loops over n; matrix multiplication using the standard algorithm
O(c^n), c > 1 | Exponential time | Recursive algorithms that make two or more recursive calls each time; determining if two logical statements are equivalent using a truth table (2^n rows)
O(n!) | Factorial time | Solving the travelling salesman problem via brute-force search; enumerating all partitions of a set
O(n^n) | n to the n | An upper bound on the value of f(n) = n! (see Example 8.6.6)

Subsection 8.6.3 Big O For Combinations of Functions

The Big O of multiple functions can be combined. If we have a complex function, we can break it down into simpler parts, find the Big O of each simpler function, and combine them.
  • If two functions are added together, their Big O is the Big O of the larger one.
    • If f1(x) = O(g1(x)) and f2(x) = O(g2(x)):
      (f1 + f2)(x) = O(max(g1(x), g2(x))).
  • If two functions are multiplied together, their Big O is the product of the Big O of both.
    • If f1(x) = O(g1(x)) and f2(x) = O(g2(x)):
      (f1·f2)(x) = O(g1(x)·g2(x)).

Example 8.6.9. Big O of 7x^3·log_2 x.

Here we have two functions multiplied together: f(x) = 7x^3 and g(x) = log_2 x. We can calculate the Big O of each separately and combine them.
For the first, Theorem 8.6.4 applies, so we have f(x) = O(x^3). For g(x) there are no powers or constants, so the Big O is just log_2 x.
Multiplying: 7x^3·log_2 x = O(x^3·log_2 x).

Example 8.6.10. Big O of 9x^2 + 3^x.

Here we have two functions added together: f(x) = 9x^2 and g(x) = 3^x. We must calculate the Big O of each separately and decide which is larger.
For the first, Theorem 8.6.4 applies, so we have f(x) = O(x^2). For g(x), this is an exponential in x, so the Big O is just the function itself: 3^x. Looking at Figure 8.6.7 we can see that 3^x grows much faster than the polynomial x^2; therefore 9x^2 + 3^x = O(3^x).
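The text does not pick explicit witnesses for this example; one valid pair, chosen here for illustration, is c = 2 and k = 5, since 9x^2 ≤ 3^x once x ≥ 5. A Python spot-check:

```python
# Spot-check Example 8.6.10: 9x^2 + 3^x <= c*3^x for x >= k.
# The witnesses c = 2, k = 5 are assumptions for this sketch, not from the text.
def f(x):
    return 9 * x**2 + 3**x

c, k = 2, 5
assert all(f(x) <= c * 3**x for x in range(k, 200))
# The threshold matters: at x = 4, 9*16 = 144 > 3^4 = 81, so the bound fails.
assert not f(4) <= c * 3**4
```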

Subsection 8.6.4 Big Omega

This is almost the same definition as Big O, except that Big Ω (Omega) gives a lower bound on the growth of a function or the running time of an algorithm. It is used to describe the best-case running time of an algorithm on a given data size.

Definition 8.6.11. Big Ω.

Given functions f(x), g(x): R -> R with f non-negative, we say f(x) is Big Ω (Omega) of g(x), written
f(x) = Ω(g(x)),
to mean
g(x) = O(f(x)).
Equivalently, f(x) = Ω(g(x)) if and only if there exist constants c > 0 and k such that for all x ≥ k:
c·f(x) ≥ |g(x)|.
To prove f(x) = Ω(g(x)), we just flip the statement around into Big O form, g(x) = O(f(x)), and find a c and k where c·f(x) is greater than or equal to |g(x)|.
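As a small illustration (the function and witnesses here are chosen for this sketch, not taken from the text): x^2 + 100x + 10 is Ω(x^2), because flipping into Big O form gives x^2 = O(x^2 + 100x + 10), which holds with c = 1 and k = 0.

```python
# Sketch of a Big Omega proof: f(x) = x^2 + 100x + 10 is Omega(x^2),
# because c*f(x) >= x^2 for all x >= 0 with c = 1.
def f(x):
    return x**2 + 100 * x + 10

c, k = 1, 0
assert all(c * f(x) >= x**2 for x in range(k, 1000))
print("x^2 + 100x + 10 = Omega(x^2) verified at sampled points")
```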

Subsection 8.6.5 Big Theta

Big Θ (Theta) combines Big O and Big Ω to specify that the growth of a function (or the running time of an algorithm) matches some other function to within constant factors. We take the definitions of both Big O and Big Ω to give both an upper and a lower bound. It is used to describe the best- and worst-case running times of an algorithm on a given data size.

Definition 8.6.12. Big Θ.

Given functions f(x), g(x): R -> R with f, g non-negative, we say f(x) is Big Θ (Theta) of g(x), written
f(x) = Θ(g(x)),
if and only if:
f(x) = O(g(x))
and
g(x) = O(f(x)).
f(x) = Θ(g(x)) means f(x) and g(x) are equal to within a constant factor.
To prove f(x) = Θ(g(x)), we must show f(x) = O(g(x)) for some c1 and k1, and then flip the statement around and show g(x) = O(f(x)) for some c2 and k2.
In the following SageMath cell, we plot f(x) = x^4 + 2^x (red). This is an addition of two functions where the larger one is the exponential 2^x, so it should be Θ(2^x). We show this by using g(x) = 2^x and plotting both c1·g(x) with c1 = 20, to show f(x) = O(2^x), and c2·g(x) with c2 = 1, to show f(x) = Ω(2^x), with black dots. The resulting plot shows that f(x) is bounded above by c1·g(x) after x ≈ 7, and it is always bounded below by c2·g(x). That is what we want to show for Big Θ.
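A numeric version of the same check (the cutoff x = 7 for the upper bound matches the plot; sample ranges are arbitrary):

```python
# Spot-check the Big Theta claim: f(x) = x^4 + 2^x is Theta(2^x).
def f(x):
    return x**4 + 2**x

c1, c2 = 20, 1
# Upper bound f(x) <= c1*2^x holds once x >= 7 ...
assert all(f(x) <= c1 * 2**x for x in range(7, 200))
# ... but not at x = 6, which is why the plot crosses near x = 7.
assert not f(6) <= c1 * 2**6
# Lower bound c2*2^x <= f(x) holds everywhere, since x^4 >= 0.
assert all(c2 * 2**x <= f(x) for x in range(0, 200))
```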

Exercises 8.6.6 Exercises for Section 8.6

1.

Show that f(x) = 8x^3 - 5x^2 + 7x + 13 is Θ(x^3). Find c1, c2, k1, k2.
Answer.
For all x ≥ 1:
8x^3 - 5x^2 + 7x + 13 ≤ 8x^3 + 7x + 13 ≤ 8x^3 + 7x^3 + 13x^3 = 28x^3.
So 8x^3 - 5x^2 + 7x + 13 is O(x^3) with c1 = 28, k1 = 1.
For the lower bound:
8x^3 - 5x^2 + 7x + 13 ≥ 8x^3 - 5x^2 ≥ 8x^3 - x^3 = 7x^3, since x^3 ≥ 5x^2 when x ≥ 5.
So 8x^3 - 5x^2 + 7x + 13 is Ω(x^3) with c2 = 7, k2 = 5. Therefore 8x^3 - 5x^2 + 7x + 13 is Θ(x^3).
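The two bounds from this answer can be spot-checked in Python (sample ranges arbitrary):

```python
# Spot-check Exercise 1: 7x^3 <= 8x^3 - 5x^2 + 7x + 13 <= 28x^3,
# with c1 = 28, k1 = 1 for the upper bound and c2 = 7, k2 = 5 for the lower.
def f(x):
    return 8 * x**3 - 5 * x**2 + 7 * x + 13

assert all(f(x) <= 28 * x**3 for x in range(1, 1000))   # O(x^3)
assert all(7 * x**3 <= f(x) for x in range(5, 1000))    # Omega(x^3)
```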

2.

Show that f(x) = 10x^2 + x - 22 is Θ(x^2). Find c1, c2, k1, k2.

3.

Show that f(x) = x^5 ≠ O(x^4).
Answer.
Suppose x^5 ≤ c·x^4 for some constant c. Dividing both sides by x^4 (for x > 0) gives x ≤ c.
This fails for every x > c, but by the definition of Big O the inequality must hold for all values of x ≥ k, for some value k. No choice of c and k works; therefore x^5 ≠ O(x^4).
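The impossibility can also be seen numerically: whatever constant c is proposed, the inequality breaks as soon as x exceeds c (the candidate values of c below are arbitrary):

```python
# Exercise 3, checked numerically: for any candidate constant c,
# x^5 > c*x^4 once x > c, so no pair (c, k) can satisfy Definition 8.6.1.
for c in (1, 10, 1000, 10**6):
    x = c + 1
    assert x**5 > c * x**4
print("x^5 != O(x^4): every candidate c fails at x = c + 1")
```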

4.

For each pair of functions below, determine whether f(x) = O(g(x)) and whether g(x) = O(f(x)). If one function is Big O of the other, give values for c and k.
  1. f(x) = x^2, g(x) = 3x.
  2. f(x) = (3x - 7)/(x + 4), g(x) = 4.
  3. f(x) = 2x, g(x) = 2x^2.
  4. f(x) = log(n!), g(x) = log(n^n).
  5. f(x) = x^10, g(x) = 10^x.

5.

Determine whether each of the functions log(3n+2) and log(n^3+1) is O(log n) by finding a valid c and k for each, or showing they do not exist.
Answer.
log(3n+2) ≤ log(3n+n) = log(4n), for n > 2
= log 4 + log n ≤ log n + log n = 2·log n, if n > 4.
Therefore log(3n+2) is O(log n) with c = 2, k = 4.
log(n^3+1) ≤ log(n^3+n^3) = log(2n^3), for n > 1
= log 2 + log(n^3) = log 2 + 3·log n ≤ 4·log n, if n > 2.
Therefore log(n^3+1) is O(log n) with c = 4, k = 2.
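Both witness pairs from this answer can be verified with Python's math.log (sample range arbitrary):

```python
import math

# Spot-check Exercise 5: log(3n+2) <= 2*log(n) for n >= 4, and
# log(n^3+1) <= 4*log(n) for n >= 2.
assert all(math.log(3 * n + 2) <= 2 * math.log(n) for n in range(4, 1000))
assert all(math.log(n**3 + 1) <= 4 * math.log(n) for n in range(2, 1000))
```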

6.

Arrange these functions into a sequence from smallest to largest Big O:
  1. n·log n
  2. 2^100·n
  3. n!
  4. 2^(2^100)
  5. 2^(2^n)
  6. 2^n
  7. 3^n
  8. n·2^n
  9. 2n
  10. 3n
  11. n^2
  12. n^3
  13. log(n!)