big O Notation

Notation:

We have f(n) = O(g(n)).

How to Read:

You should read equals like "is": it means that everything over here (f(n)) is something over there (a member of the set O(g(n))).

Definition / Explanation:

This means that there are suitable constants c and n0 such that f(n) is bounded above by c·g(n) for all sufficiently large n. So this is a pretty intuitive notion; we have seen it before.

f(n) = O(g(n)) means there are constants c > 0 and n0 > 0,

Such that

0 <= f(n) <= c·g(n)

For all n >= n0

O() notation focuses on the largest term and ignores constant factors (see the numeric sketch just after this list):

  • The largest term will eventually dominate for large enough n.
  • Constants depend on “irrelevant” things like machine speed, architecture, etc.
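
A minimal Python sketch of both points (the function f(n) = 5n^2 + 100n + 3, the witnesses c = 6 and n0 = 101, and the sampled range are illustrative choices, not taken from these notes; a numeric spot-check, not a proof):

def f(n):
    return 5 * n**2 + 100 * n + 3   # hypothetical example function

def g(n):
    return n**2                     # candidate bounding function

# 1) The largest term dominates: f(n) / n^2 approaches the leading
#    constant 5 as n grows, so lower-order terms stop mattering.
for n in (10, 100, 1_000, 10_000):
    print(n, f(n) / g(n))

# 2) The definition with concrete witnesses: with c = 6 and n0 = 101,
#    0 <= f(n) <= c * g(n) holds on the sampled range
#    (5n^2 + 100n + 3 <= 6n^2 once n^2 >= 100n + 3).
c, n0 = 6, 101
assert all(0 <= f(n) <= c * g(n) for n in range(n0, 10_000))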

Assumption:

We are going to assume that f(n) is non-negative here. And I just want f(n) to be bounded above by a constant times g(n).

Example:

We have seen a bunch of examples, but something like 2n^2 = O(n^3) holds by this definition.

Roughly, this means that if you drop leading constants and lower-order terms, then the left-hand side is less than or equal to the right-hand side.
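
For instance, the witnesses c = 2 and n0 = 1 work here: 2n^2 <= 2·n^3 for every n >= 1, which is exactly the inequality the definition asks for.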

Range:

So, big O corresponds roughly to less than or equal to.

Usage:

Used to state an upper bound on complexity or execution time.

big O is great for expressing upper bounds.

Represent:

A set of functions; f(n) = O(g(n)) means f(n) belongs to the set O(g(n)). Membership here plays the role of "at most" (subset-like).

Notation Nature:

Asymmetric

Big-O Taxonomy:

Growth Rate    Name            Notes
O(1)           Constant        Fixed number of steps, independent of input size
O(log n)       Logarithmic     E.g. binary search
O(n)           Linear          Steps proportional to the size of the task
O(n log n)     Linearithmic    Also called log-linear or quasilinear; most efficient sorts
O(n^2)         Quadratic       E.g. selection sort, two nested loops over the input
O(n^3)         Cubic
2^O(n)         Exponential     Common for artificial-intelligence tasks; really quite bad
O(n!)          Factorial

big Omega Notation

Notation:

f(n) = Ω(g(n))

How to Read:

You should read equals like "is": it means that everything over here (f(n)) is in the set over there (Ω(g(n))).

Definition / Explanation:

Ω(g(n)) = { f(n) : there exist constants c > 0 and n0 > 0

such that 0 <= c·g(n) <= f(n)

for all n >= n0 }

 

f(n) = Ω(g(n)) means f(n) is at least some constant times g(n) for sufficiently large n.

Assumption:

We are going to assume that f(n) is non-negative here. And I just want some constant times g(n) to be bounded above by f(n).

Example:

√n = Ω(lg n) : up to constant factors, √n is at least lg n for sufficiently large n
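
A minimal Python spot-check of this example (the witnesses c = 1 and n0 = 16 and the sampled range are illustrative choices; a numeric check is not a proof):

import math

# Spot-check sqrt(n) = Omega(lg n): with c = 1 and n0 = 16,
# 0 <= c * lg(n) <= sqrt(n) on the sampled range.
c, n0 = 1, 16   # illustrative witnesses (sqrt(16) = 4 = log2(16))
assert all(0 <= c * math.log2(n) <= math.sqrt(n)
           for n in range(n0, 100_000))

# The choice of n0 matters: with c = 1 the inequality fails for some
# small n (e.g. sqrt(8) ~ 2.83 < log2(8) = 3), which is exactly why
# the definition only asks for "sufficiently large n".
print("holds on the sampled range with c =", c, "and n0 =", n0)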

Range/Corresponds To:

corresponds to greater than or equal to

Usage:

Used to express functions that are at least some quantity

Represent:

A set of functions; f(n) = Ω(g(n)) means f(n) belongs to the set Ω(g(n)). Membership here plays the role of "at least" (superset-like, the mirror of big O).

Notation Nature:

Asymmetric on its own; rather, big O and big Omega are transposes of each other: f(n) = O(g(n)) if and only if g(n) = Ω(f(n)).


big Theta Notation

With big Theta we ignore constant factors and lower-order terms.

Notation:

f(n) = Θ(g(n))

How to Read:

You should read equals like "is": f(n) is in the set Θ(g(n)), which picks out what is common to both O(g(n)) and Ω(g(n)).

Definition / Explanation:

f(n) = Θ(g(n)) means f(n) is sandwiched between constant multiples of g(n) for sufficiently large n: there exist constants c1 > 0, c2 > 0 and n0 > 0 such that 0 <= c1·g(n) <= f(n) <= c2·g(n) for all n >= n0. Equivalently, f(n) is both O(g(n)) and Ω(g(n)).

Assumption:

We are going to assume that f(n) is non-negative here. And I just want f(n) to be bounded both above and below by constant multiples of g(n).

Example:

n^2 = Θ(2n^2) : up to constant factors, n^2 is equal to 2n^2 for sufficiently large n
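
Concrete witnesses here (an illustrative choice) are c1 = 1/4, c2 = 1 and n0 = 1: (1/4)·2n^2 = n^2/2 <= n^2 <= 2n^2 = 1·2n^2 for every n >= 1, so n^2 is indeed sandwiched between constant multiples of 2n^2.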

Range/Corresponds To:

Corresponds roughly to "equal to": equality up to constant factors, i.e. the set common to big O and big Omega.

Usage:

Used to express functions that are equal up to constant factors, i.e. tight bounds.

Represent:

The intersection of big O and big Omega: Θ(g(n)) = O(g(n)) ∩ Ω(g(n)).

Notation Nature:

Symmetric: f(n) = Θ(g(n)) if and only if g(n) = Θ(f(n)).


Time Complexity of Algorithms

 

O(1) (Constant Time)
The operation takes a fixed number of steps, independent of the size of the input.
E.g. Push and Pop operations for a stack.
Enqueue and Dequeue operations for a queue.

O(log n) (Logarithmic Time)
The time taken grows by a constant amount each time the size of the input data set doubles.
E.g. Binary search.
Insert and Find operations in a binary search tree.
Insert and Remove operations for a heap.

O(n) (Linear Time)
The number of steps is proportional to the size of the task
(if the size of the task increases, the number of steps increases proportionally).
E.g. Finding the max/min element in a list.
Sequential search in an unsorted list of n elements.
Traversal of a tree with n nodes.
Calculating n factorial iteratively.
Finding the nth Fibonacci number iteratively.

O(n log n) (Linearithmic Time)
You perform an O(log n) operation for each item in your input.
Also called log-linear or quasilinear; also written as "n log n" time.
Most efficient sorting algorithms are examples of this.
E.g. Quick Sort, Merge Sort.

O(n^2) (Quadratic Time)
The number of operations is proportional to the square of the size of the task.
E.g. Selection sort of n elements.
Comparing two-dimensional arrays of size n by n.
Finding duplicates in an unsorted list of n elements (implemented with two nested loops).

2^O(n) (Exponential Time)
Common for artificial-intelligence tasks and really quite bad: exponential-time algorithms run the risk that a decent-sized input will not finish before the person wanting the result retires.

O(n!) (Factorial Time)

(Short Python sketches of a logarithmic-time and a quadratic-time routine follow this table.)
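
A minimal Python sketch of two of the classes above: a logarithmic-time binary search and a quadratic-time nested-loop duplicate finder (the function names and sample data are illustrative choices):

def binary_search(sorted_items, target):
    """O(log n): halve the search range on every comparison."""
    lo, hi = 0, len(sorted_items) - 1
    while lo <= hi:
        mid = (lo + hi) // 2
        if sorted_items[mid] == target:
            return mid
        elif sorted_items[mid] < target:
            lo = mid + 1
        else:
            hi = mid - 1
    return -1  # not found

def find_duplicates(items):
    """O(n^2): two nested loops compare every pair of elements."""
    dups = []
    for i in range(len(items)):
        for j in range(i + 1, len(items)):
            if items[i] == items[j]:
                dups.append(items[i])   # reported once per duplicated pair
    return dups

print(binary_search([1, 3, 5, 7, 9, 11], 7))   # -> 3
print(find_duplicates([4, 2, 7, 2, 9, 4]))     # -> [4, 2]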

 

Run-time matrix for each Big O term (recomputed in the Python sketch after the table):

 

Input   Constant   Logarithmic   Linear   Linearithmic   Quadratic   Cubic
N       O(1)       O(log N)      O(N)     O(N log N)     O(N^2)      O(N^3)
1       1          1             1        1              1           1
2       1          1             2        2              4           8
4       1          2             4        8              16          64
8       1          3             8        24             64          512
16      1          4             16       64             256         4,096
1024    1          10            1,024    10,240         1,048,576   1,073,741,824
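
A minimal Python sketch that recomputes the matrix above for the same input sizes (treating log N as 1 when N = 1 to match the table's convention; the column widths are arbitrary display choices):

import math

# Recompute the run-time matrix for the input sizes listed above.
print(f"{'N':>6} {'O(1)':>6} {'O(log N)':>9} {'O(N)':>8} "
      f"{'O(N log N)':>12} {'O(N^2)':>12} {'O(N^3)':>14}")
for n in (1, 2, 4, 8, 16, 1024):
    log_n = max(1, int(math.log2(n)))   # the table treats log(1) as 1
    print(f"{n:>6} {1:>6} {log_n:>9} {n:>8} "
          f"{n * log_n:>12,} {n**2:>12,} {n**3:>14,}")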