
An algorithm is any well-defined computational procedure that takes ____________ and produces _______________.
some value or set of values as input, some value or set of values as output

What does insertion sort do?
Insertion sort takes a sequence of n values and reorders the input sequence into either ascending or descending order.

Programmatically, what is needed to accomplish an insertion sort?
The set of values to be sorted is stored in an array A. A loop then iterates through the array (starting with the 2nd element) and inserts the current element A[i] into its correct position in the already sorted subarray A[1 .. i-1]. Finding the correct position for A[i] requires another loop. The outer loop goes from left to right (A[2] to A[n]) while the inner loop goes from right to left (A[i-1] down to A[1], stopping as soon as it finds the right spot).

How many elements in A must be repositioned in the best case of insertion sort?
none

How many elements in A must be repositioned in the worst case of insertion sort?
all of the elements

Look at the pseudocode for InsertionSort.
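The pseudocode itself is not reproduced in these notes; as a sketch, the CLRS-style insertion sort translates to Python (0-indexed rather than 1-indexed) roughly like this:

```python
def insertion_sort(a):
    """Sort list a in place, ascending (CLRS-style insertion sort)."""
    for i in range(1, len(a)):        # outer loop: A[2] .. A[n] in 1-indexed terms
        key = a[i]                    # element to insert into the sorted prefix
        j = i - 1
        while j >= 0 and a[j] > key:  # inner loop: scan right to left
            a[j + 1] = a[j]           # shift larger elements one slot right
            j -= 1
        a[j + 1] = key                # drop key into its correct position
    return a
```

For example, insertion_sort([5, 2, 4, 6, 1, 3]) returns [1, 2, 3, 4, 5, 6].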

What is a loop invariant?
A loop invariant is a condition [among program variables] that is necessarily true immediately before and immediately after each iteration of a loop. (Note that this says nothing about its truth or falsity part way through an iteration.)

What three things must be shown to be true in order to assert the correctness of an algorithm using a loop invariant?
 1. initialization: the loop invariant is true prior to the first iteration of the loop
 2. maintenance: if the loop invariant is true before an iteration of the loop, it remains true before the next iteration of the loop
 3. termination: when the loop terminates, the invariant holds for the entire output, which shows the algorithm is correct
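For insertion sort, the invariant is that the prefix before the current element is always sorted. As an illustrative sketch (the checked version and its assertions are not from the source), the three conditions can be made explicit with runtime assertions:

```python
def insertion_sort_checked(a):
    """Insertion sort that asserts its loop invariant:
    the prefix a[0:i] is sorted before each outer iteration."""
    for i in range(1, len(a)):
        # Invariant check: a[0:i] is sorted. It holds at i=1 (initialization)
        # and is re-established by each pass (maintenance).
        assert all(a[k] <= a[k + 1] for k in range(i - 1))
        key = a[i]
        j = i - 1
        while j >= 0 and a[j] > key:
            a[j + 1] = a[j]
            j -= 1
        a[j + 1] = key
    # Termination: the invariant with i = len(a) means the whole array is sorted.
    assert all(a[k] <= a[k + 1] for k in range(len(a) - 1))
    return a
```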

What is asymptotic analysis?
Asymptotic analysis is the comparison of functions as inputs approach infinity.

What is asymptotic efficiency?
Asymptotic efficiency examines how the running time of an algorithm increases with the size of the input, as the size of the input increases without bound.

In order to compare and contrast algorithms in terms of efficiency, we must formalize the notion of ____________.
complexity

What is complexity?
Complexity is concerned with calculating how long a program takes to run, independent of the machine it runs on.

What is time complexity?
Time complexity attempts to determine the running time of an algorithm by computing the sum of running times for each statement executed. (Note that each statement executed uses a certain number of processing cycles, and each statement is run some number of times.)

Look at cost calculation for InsertionSort.

Look at running time for InsertionSort.

Why is the best case time complexity for InsertionSort linear time?
The best case for InsertionSort is that the elements are already sorted which means that the while loop is never run. In this case, the algorithm is reduced to the for loop that loops through the 2nd through nth elements (which are then found to already be in the correct position).

Why is the worst case time complexity for InsertionSort quadratic time?
In the worst case, the array to be sorted is in reverse order, and each element must not only be moved, but the while loop must be executed the maximum number of times every time. This means that the time required grows roughly like the curve y = x^2, where x is the number of elements to be sorted.
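One way to see the linear/quadratic gap concretely is to count the inner-loop shifts insertion sort performs (the counting helper here is illustrative, not from the source):

```python
def insertion_sort_shifts(a):
    """Run insertion sort on a copy of a; return the number of element shifts."""
    a = list(a)
    shifts = 0
    for i in range(1, len(a)):
        key = a[i]
        j = i - 1
        while j >= 0 and a[j] > key:
            a[j + 1] = a[j]   # each shift is one unit of inner-loop work
            shifts += 1
            j -= 1
        a[j + 1] = key
    return shifts

n = 10
print(insertion_sort_shifts(range(n)))         # best case (already sorted): 0
print(insertion_sort_shifts(range(n, 0, -1)))  # worst case (reversed): n*(n-1)/2 = 45
```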

Why do we focus on the worst case with regard to time complexity?
The worst case is an upper bound on total execution time: it guarantees the sort will complete within that amount of time. Sometimes timeliness is important.

What exactly does theta(n) mean? How is it different from theta(n^2)?
Theta(n) means that the program runs in linear time. The actual expected running time could be expressed by n, or it could be 108n - 36. We only concern ourselves with the order of growth. An algorithm that is theta(n^2) would be expected to take significantly more time than a theta(n) algorithm for sufficiently large input sets.

What is the difference between theta notation and big O notation?
Theta notation gives both an upper and a lower bound on the time complexity of an algorithm, while big O only gives an upper bound. Theta notation is therefore a stronger statement than big O notation.

Why is omega notation used?
Omega notation is used to provide an asymptotic lower bound.

Insertion sort is __________, while merge sort can be described as __________.
incremental, divide and conquer

Generally, how does merge sort work?
An array A contains the elements to be sorted. A is split roughly in half into 2 subarrays. This splitting is repeated until the arrays cannot be further divided (one element in each subarray). This is the "divide" step. The "conquer" or combine step involves merging the two sorted subarrays to produce a single sorted subarray. This is a recursive algorithm.

Look at the MergeSort pseudocode.

Look at the Merge pseudocode.
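Neither pseudocode is reproduced in these notes; as a sketch, the divide step (merge_sort) and the combine step (merge) translate to Python roughly like this:

```python
def merge(left, right):
    """Combine two sorted lists into one sorted list."""
    out = []
    i = j = 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            out.append(left[i])
            i += 1
        else:
            out.append(right[j])
            j += 1
    out.extend(left[i:])   # at most one of these has anything left
    out.extend(right[j:])
    return out

def merge_sort(a):
    """Recursively split a roughly in half, sort each half, then merge."""
    if len(a) <= 1:        # base case: cannot divide further
        return list(a)
    mid = len(a) // 2
    return merge(merge_sort(a[:mid]), merge_sort(a[mid:]))
```

For example, merge_sort([5, 2, 4, 6, 1, 3]) returns [1, 2, 3, 4, 5, 6].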

Look at the visual demonstration of MergeSort.

What is the time complexity of MergeSort?
Theta (n lg n)

Why does the complexity of MergeSort include a logarithmic factor?
The execution cost at each level of the recursion tree is c * n (a constant times the number of elements). Because the array is split in half each time, the height of the tree is lg n (log base 2 of n), giving a total cost of c * n * lg n across all levels.
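The lg n height can be checked directly by counting how many times n halves before the subarrays reach size 1 (a small illustrative helper, not from the source):

```python
def split_levels(n):
    """Count how many times an n-element array is halved until subarrays hit size 1."""
    levels = 0
    while n > 1:
        n = (n + 1) // 2   # ceiling of n/2: the larger half after a split
        levels += 1
    return levels

print(split_levels(16))    # 4 levels, since lg 16 = 4
```

For n = 16 the tree has 4 levels of splitting, matching lg 16 = 4; for sizes that are not powers of two the height is the ceiling of lg n.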

Look at the illustration of MergeSort complexity.

