Design of Algorithms
Analysis of Algorithms
Table of Contents
- What analysis measures
- Running time
- Orders of Growth
- Efficiencies
- Asymptotic Notations
- Comparing Orders of Growth
- Efficiency Classes
- Process: Analysing time efficiency of non-recursive algorithms
- Process: Analysing time efficiency of recursive algorithms
What analysis measures
- time complexity/efficiency: how fast an algorithm runs
- space complexity/efficiency: amount of space needed to run an algorithm and space required for input/output
- most algorithms run longer on longer inputs, so consider efficiency as a function of input size
- when the input is a single number $n$ and it is a magnitude (e.g. checking if $n$ is prime), you measure size using $b = \lfloor \log_2 n \rfloor + 1$, the number of bits in $n$'s binary representation
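A small illustration of this size measure (Python is assumed here purely for the sketch; `int.bit_length()` is a real built-in that computes the same quantity):

```python
import math

def input_size_in_bits(n: int) -> int:
    """Size of a magnitude input n, measured as b = floor(log2 n) + 1."""
    return math.floor(math.log2(n)) + 1

# n.bit_length() computes the same quantity directly:
for n in [1, 7, 8, 1_000_000]:
    assert input_size_in_bits(n) == n.bit_length()
    print(n, "->", n.bit_length(), "bits")
```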
Running time
- counting all operations that run is usually difficult and unnecessary
- instead, identify the basic operation that accounts for the highest proportion of running time and count the number of times it is executed
- usually the most time-consuming operation in the innermost loop
- e.g. sorting: basic operation is key comparison
- arithmetic: (least time consuming) addition ~ subtraction < multiplication < division (most time consuming)
- time complexity analysis: determine number of times basic operation is executed for input of size $n$
Orders of Growth
- small $n$: differences between algorithms are in the noise
- large $n$: the order of growth of the time complexity dominates and differentiates between algorithms
Some functions
- $\log_2 n$ grows so slowly you would expect an algorithm with basic-operation count $\log_2 n$ to run practically instantaneously on inputs of all realistic size
- a change of base results in a multiplicative constant ($\log_a n = \log_a b \cdot \log_b n$), so you can simply write $\log n$ when you are only interested in order of growth
- $2^n$ and $n!$ are both exponential-growth functions. Algorithms requiring an exponential number of operations are practical for solving only problems of very small size
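A tiny sketch (Python assumed, for illustration only) makes the gap between these growth rates visible:

```python
import math

# Compare growth of common complexity functions as n increases.
for n in [10, 100, 1000]:
    print(f"n={n:>5}: log2(n)={math.log2(n):6.1f}  "
          f"n*log2(n)={n * math.log2(n):10.0f}  "
          f"n^2={n**2:10d}  2^n has {len(str(2**n)):4d} digits")
```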
Efficiencies
Algorithm run time can depend on the particulars of the input, e.g. sequential search (see the sketch below)
Efficiency can be:
- worst-case: the algorithm runs longest among all possible inputs of size $n$
- best-case: the algorithm runs fastest among all possible inputs of size $n$
- average-case: the algorithm runs on typical/random input; typically more difficult to assess and requires assumptions about the input
- amortized: for cases where a single operation can be expensive, but the remaining operations in a sequence perform much better than the worst case
  - the high cost is amortized over the entire sequence
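A minimal sequential-search sketch (Python assumed, purely for illustration), instrumented to count the basic operation:

```python
def sequential_search(lst, key):
    """Return (index of key or -1, comparison count); basic operation = key comparison."""
    comparisons = 0
    for i, item in enumerate(lst):
        comparisons += 1
        if item == key:
            return i, comparisons
    return -1, comparisons

data = [3, 1, 4, 1, 5, 9, 2, 6]
print(sequential_search(data, 3))   # best case: (0, 1) -> one comparison
print(sequential_search(data, 7))   # worst case: (-1, 8) -> n = 8 comparisons
```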
Asymptotic Notations
Notations for comparing orders of growth:
- $O$: big-oh; order of growth $\le$
- $O(g(n))$: the set of all functions with a lower or the same order of growth as $g(n)$ as $n \to \infty$
- $\Omega$: big-omega; order of growth $\ge$
- $\Theta$: big-theta; order of growth $=$
e.g. $n(n-1)/2 \in O(n^2)$, $n(n-1)/2 \in \Omega(n)$, $n(n-1)/2 \in \Theta(n^2)$
Big O
Definition: A function $t(n) \in O(g(n))$ if $\exists c > 0$ and integer $n_0 \ge 0$ s.t. $\forall n \ge n_0$: $t(n) \le c \cdot g(n)$
Big $\Omega$
Definition: A function $t(n) \in \Omega(g(n))$ if $\exists c > 0$ and integer $n_0 \ge 0$ s.t. $\forall n \ge n_0$: $t(n) \ge c \cdot g(n)$
Big $\Theta$
Definition: A function $t(n) \in \Theta(g(n))$ if $\exists c_1, c_2 > 0$ and integer $n_0 \ge 0$ s.t. $\forall n \ge n_0$: $c_2 \cdot g(n) \le t(n) \le c_1 \cdot g(n)$
Theorem: If $t_1(n) \in O(g_1(n))$ and $t_2(n) \in O(g_2(n))$, then $t_1(n) + t_2(n) \in O(\max\{g_1(n), g_2(n)\})$
Analogous assertions also hold for $\Omega$, $\Theta$
- This implies that an algorithm composed of two consecutively executed parts has an overall efficiency determined by the part with the higher order of growth (the less efficient part)
- e.g.: check if an array has equal elements by first sorting, then checking consecutive items for equality
- part 1 may take no more than $\frac{n(n-1)}{2}$ comparisons, i.e. it is in $O(n^2)$
- part 2 may take no more than $n - 1$ comparisons, i.e. it is in $O(n)$
- overall efficiency: $O(\max\{n^2, n\}) = O(n^2)$
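A sketch of this two-part algorithm (Python assumed for illustration; note Python's built-in `sorted` runs in $O(n \log n)$, whereas the $\frac{n(n-1)}{2}$ bound above assumes a basic quadratic sort):

```python
def has_equal_elements(arr):
    """Part 1: sort the array. Part 2: scan consecutive items for equality
    (at most n - 1 comparisons). Overall efficiency is dominated by part 1."""
    s = sorted(arr)                      # part 1
    return any(s[i] == s[i + 1]          # part 2
               for i in range(len(s) - 1))

print(has_equal_elements([5, 2, 8, 2]))  # True
print(has_equal_elements([5, 2, 8, 1]))  # False
```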
Comparing Orders of Growth
- to directly compare two functions $t(n)$ and $g(n)$, compute the limit of their ratio: $\lim_{n \to \infty} \frac{t(n)}{g(n)}$
- This limit could be:
  - (a) $0$: $t(n)$ has a smaller order of growth than $g(n)$
  - (b) $c > 0$: $t(n)$ has the same order of growth as $g(n)$
  - (c) $\infty$: $t(n)$ has a larger order of growth than $g(n)$
- Cases a and b mean $t(n) \in O(g(n))$
- Cases b and c mean $t(n) \in \Omega(g(n))$
- Case b means $t(n) \in \Theta(g(n))$
L’Hopital’s rule: if $\lim_{n \to \infty} t(n) = \lim_{n \to \infty} g(n) = \infty$ and the derivatives exist, then $\lim_{n \to \infty} \frac{t(n)}{g(n)} = \lim_{n \to \infty} \frac{t'(n)}{g'(n)}$
Stirling’s Formula
For large $n$: $n! \approx \sqrt{2\pi n} \left(\frac{n}{e}\right)^n$
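A quick numerical check of Stirling's formula (Python assumed, for illustration):

```python
import math

# Stirling's approximation: n! ~ sqrt(2*pi*n) * (n/e)^n
for n in [5, 10, 20]:
    exact = math.factorial(n)
    approx = math.sqrt(2 * math.pi * n) * (n / math.e) ** n
    print(f"n={n:2d}  exact={exact:.4e}  stirling={approx:.4e}  ratio={approx / exact:.4f}")
```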
Efficiency Classes
Class | Name | Comments |
---|---|---|
$1$ | constant | very few algorithms fall in this class |
$\log n$ | logarithmic | results from cutting problem’s size by a constant factor |
$n$ | linear | scan a list of size $n$, e.g. sequential search |
$n \log n$ | linearithmic | divide-and-conquer, e.g. mergesort, quicksort |
$n^2$ | quadratic | two embedded loops, e.g. basic sorting, matrix operations |
$n^3$ | cubic | three embedded loops, e.g. often used in linear algebra |
$2^n$ | exponential | generate all subsets of an $n$-element set |
$n!$ | factorial | generate all permutations of an $n$-element set |
Process: Analysing time efficiency of non-recursive algorithms
- define a parameter $n$ indicating the input’s size
- identify algorithm’s basic operation (typically on innermost loop)
- check if number of times basic operation is executed is only a function of input size
- if not: worst case, average case to be considered separately
- set up sum expressing number of times the basic operation is executed
- use formulas/sum manipulation to find a closed form solution for the count or determine order of growth
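The process on a concrete non-recursive algorithm (a sketch, Python assumed): finding the maximum element, where the basic operation is the comparison in the loop body, executed $\sum_{i=1}^{n-1} 1 = n - 1$ times, so the algorithm is in $\Theta(n)$.

```python
def max_element(arr):
    """Basic operation: the comparison arr[i] > max_val in the loop body.
    It executes exactly n - 1 times regardless of input values,
    so no separate worst/average-case analysis is needed."""
    max_val = arr[0]
    count = 0                      # counts executions of the basic operation
    for i in range(1, len(arr)):
        count += 1
        if arr[i] > max_val:
            max_val = arr[i]
    return max_val, count

print(max_element([3, 9, 2, 7]))   # (9, 3) -> n - 1 = 3 comparisons
```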
Basic rules
Scalar multiplication: $\sum_{i=l}^{u} c \, a_i = c \sum_{i=l}^{u} a_i$
Addition: $\sum_{i=l}^{u} (a_i + b_i) = \sum_{i=l}^{u} a_i + \sum_{i=l}^{u} b_i$
In particular: $\sum_{i=l}^{u} 1 = u - l + 1$
Triangle numbers: $\sum_{i=1}^{n} i = \frac{n(n+1)}{2} \in \Theta(n^2)$
Geometric series: $\sum_{i=0}^{n} a^i = \frac{a^{n+1} - 1}{a - 1}$ for $a \ne 1$
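Applying these rules to a typical nested-loop count (a worked example; the loop bounds are chosen for illustration):

$$\sum_{i=0}^{n-2} \sum_{j=i+1}^{n-1} 1 = \sum_{i=0}^{n-2} (n - 1 - i) = \sum_{k=1}^{n-1} k = \frac{(n-1)n}{2} \in \Theta(n^2)$$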
Process: Analysing time efficiency of recursive algorithms
- define a parameter $n$ indicating the input’s size
- identify basic operation
- check if number of times basic operation is executed is only a function of input size
- if not: worst case, average case to be considered separately
- set up recurrence relation and initial condition corresponding to number of times basic operation is executed
- solve recurrence or ascertain order of growth of its solution
- solution of recurrence relation can be by:
- backwards substitution/telescoping method: substitution of M(n-1), M(n-2), …, and identifying the pattern
- can be helpful to build a tree of recursive calls, and count the number of nodes to get the total number of calls
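A worked example of backwards substitution (the recurrence is the standard multiplication count $M(n)$ for computing $n!$ recursively):

$$M(n) = M(n-1) + 1, \qquad M(0) = 0$$

$$M(n) = [M(n-2) + 1] + 1 = M(n-2) + 2 = \dots = M(n-i) + i = \dots = M(0) + n = n$$

so $M(n) \in \Theta(n)$.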
Divide and Conquer
- binary/$n$-ary recursion is encountered when the input is split into smaller instances of size $n/b$, e.g. binary search
- you see the term $T(n/b)$ in the recurrence relation
- backwards substitution stumbles on values of $n$ that are not powers of $b$
- to solve these, you assume $n = b^k$ and then use the smoothness rule, which implies that the order of growth for $n = b^k$ gives a correct answer about the order of growth for all $n$

For the following definitions, $f(n)$ is a non-negative function defined for $n \in \mathbb{N}$
DEFINITION: eventually non-decreasing
- eventually nondecreasing: $f(n)$ is eventually non-decreasing if $\exists n_0$ s.t. $f(n)$ is non-decreasing on $[n_0, \infty)$, i.e. $f(n_1) \le f(n_2)$ for all $n_0 \le n_1 < n_2$
- e.g. $f(n) = (n - 100)^2$ is eventually non-decreasing
  - it is decreasing on the interval $[0, 100]$
- most functions encountered in the analysis of algorithms are eventually non-decreasing
DEFINITION: smooth
$f(n)$ is smooth if:
- it is eventually non-decreasing, AND
- $f(2n) \in \Theta(f(n))$

e.g. $f(n) = n \log n$ is smooth because $f(2n) = 2n \log(2n) = 2n(\log 2 + \log n) = (2 \log 2)n + 2n \log n \in \Theta(n \log n)$
- fast-growing functions, e.g. $a^n$ where $a > 1$, and $n!$, are not smooth
- e.g. $f(n) = 2^n$ is not smooth because $f(2n) = 2^{2n} = 4^n \notin \Theta(2^n)$
THEOREM:
Let $f(n)$ be smooth. For any fixed integer $b \ge 2$: $f(bn) \in \Theta(f(n))$, i.e. $\exists c_b, d_b > 0$ and $n_0 \ge 0$ s.t. $d_b f(n) \le f(bn) \le c_b f(n)$ for all $n \ge n_0$
- the corresponding assertion also holds for $O$ and $\Omega$
THEOREM: Smoothness rule
Let $T(n)$ be an eventually non-decreasing function and $f(n)$ be a smooth function. If $T(n) \in \Theta(f(n))$ for values of $n$ that are powers of $b$, where $b \ge 2$, then $T(n) \in \Theta(f(n))$
- analogous results also hold for $O$ and $\Omega$
- allows us to expand information about the order of growth established on a convenient subset of values (powers of $b$) to the entire domain
THEOREM: Master Theorem
Let $T(n)$ be an eventually non-decreasing function that satisfies the recurrence $T(n) = a T(n/b) + f(n)$ for $n = b^k$, $k = 1, 2, \dots$, where $a \ge 1$ and $b \ge 2$. If $f(n) \in \Theta(n^d)$ where $d \ge 0$, then

$$T(n) \in \begin{cases} \Theta(n^d) & \text{if } a < b^d \\ \Theta(n^d \log n) & \text{if } a = b^d \\ \Theta(n^{\log_b a}) & \text{if } a > b^d \end{cases}$$

- analogous results also hold for $O$ and $\Omega$
- helps with quick efficiency analysis of divide-and-conquer and decrease-by-constant-factor algorithms
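Two quick applications of the Master Theorem (worked examples on standard recurrences):
- mergesort: $T(n) = 2T(n/2) + \Theta(n)$, so $a = 2$, $b = 2$, $d = 1$; since $a = b^d$, $T(n) \in \Theta(n \log n)$
- binary search: $T(n) = T(n/2) + \Theta(1)$, so $a = 1$, $b = 2$, $d = 0$; since $a = b^d$, $T(n) \in \Theta(\log n)$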