sorting runtime and recurrence

  1. An algorithm is any well-defined computational procedure that takes ____________ and produces _______________.
    some value or set of values as input, some value or set of values as output
  2. What does insertion sort do?
    Insertion sort takes a sequence of n values and reorders the input sequence either in ascending or descending order.
  3. Programmatically, what is needed to accomplish an insertion sort?
    The set of values to be sorted should be stored in an array A. Then a loop (L1) iterates through the array (starting with the 2nd element) and inserts the current element A[i] into position into the already sorted sub-array A[1 ... i-1]. Finding the correct position to put A[i] requires another loop. The outer loop goes from left to right (A[1] to A[n]) while the inner loop goes from right to left (A[i-1] to A[0], stops when it finds the right spot though).
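The two loops described above can be sketched in Python (a minimal, 0-indexed version of the 1-indexed pseudocode the notes refer to):

```python
def insertion_sort(a):
    """Sort list a in ascending order, in place (0-indexed version
    of the 1-indexed A[1..n] description above)."""
    for i in range(1, len(a)):          # outer loop: left to right, from the 2nd element
        key = a[i]                      # element to insert into the sorted prefix
        j = i - 1
        while j >= 0 and a[j] > key:    # inner loop: right to left
            a[j + 1] = a[j]             # shift larger elements one slot right
            j -= 1                      # stops early once the right spot is found
        a[j + 1] = key                  # drop key into position

data = [5, 2, 4, 6, 1, 3]
insertion_sort(data)
print(data)  # [1, 2, 3, 4, 5, 6]
```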
  4. How many elements in A must be repositioned in the best case of insertion sort?
    none - if the input is already sorted, every element is already in position
  5. How many elements in A must be repositioned in the worst case of insertion sort?
    all of the elements
  6. Look at the pseudocode for Insertion-Sort.
  7. What is a loop invariant?
    A loop invariant is a condition [among program variables] that is necessarily true immediately before and immediately after each iteration of a loop. (Note that this says nothing about its truth or falsity part way through an iteration.)
  8. What three things must be shown to be true in order to assert the correctness of an algorithm using a loop invariant?
    • 1. initialization - the loop invariant is true prior to the first iteration of the loop
    • 2. maintenance - if the loop invariant is true before an iteration of the loop, it remains true before the next iteration
    • 3. termination - when the loop terminates, the invariant is true for the entire output
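The three properties can be made concrete by asserting the invariant in code. This is an illustrative sketch for insertion sort, where the invariant is "the prefix a[0..i-1] is sorted" (the assertions are for demonstration, not something the real algorithm needs):

```python
def insertion_sort_checked(a):
    """Insertion sort with the loop invariant checked at runtime."""
    for i in range(1, len(a)):
        # invariant (initialization/maintenance): a[0..i-1] is sorted
        assert all(a[k] <= a[k + 1] for k in range(i - 1)), "invariant violated"
        key = a[i]
        j = i - 1
        while j >= 0 and a[j] > key:
            a[j + 1] = a[j]
            j -= 1
        a[j + 1] = key
    # termination: the loop ends with i = n, so the invariant now
    # covers the whole array - the entire output is sorted
    assert all(a[k] <= a[k + 1] for k in range(len(a) - 1))
    return a
```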
  9. What is asymptotic analysis?
    Asymptotic analysis is the comparison of functions as inputs approach infinity.
  10. What is asymptotic efficiency?
    Asymptotic efficiency examines how the running time of an algorithm increases with the size of the input, as the size of the input increases without bound.
  11. In order to compare and contrast algorithms in terms of efficiency, we must formalize the notion of ____________.
    complexity
  12. What is complexity?
    Complexity is concerned with how to calculate how long a program takes, independent of the machine it runs on.
  13. What is time complexity?
    Time complexity attempts to determine the running time of an algorithm by computing the sum of running times for each statement executed. (Note that each statement executed uses a certain number of processing cycles, and each statement is run some number of times.)
  14. Look at cost calculation for Insertion-Sort.
  15. Look at running time for Insertion-Sort.
  16. Why is the best case time complexity for Insertion-Sort linear time?
    The best case for Insertion-Sort is that the elements are already sorted which means that the while loop is never run. In this case, the algorithm is reduced to the for loop that loops through the 2nd through nth elements (which are then found to already be in the correct position).
  17. Why is the worst case time complexity for Insertion-Sort quadratic time?
    In the worst case, the array to be sorted is in reverse order, so every element must be shifted past the entire sorted prefix - the while loop runs the maximum number of times on every iteration. The time required is therefore analogous to the curve y = x^2, where x is the number of elements to be sorted.
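Both cases can be demonstrated by counting the inner-loop shifts insertion sort performs (a small experiment, using a hypothetical `count_shifts` helper):

```python
def count_shifts(a):
    """Run insertion sort on a copy of a and return how many
    element shifts the inner while loop performed."""
    a = list(a)
    shifts = 0
    for i in range(1, len(a)):
        key = a[i]
        j = i - 1
        while j >= 0 and a[j] > key:
            a[j + 1] = a[j]
            shifts += 1                  # one shift per inner-loop pass
            j -= 1
        a[j + 1] = key
    return shifts

n = 10
print(count_shifts(range(n)))            # 0  - best case: already sorted
print(count_shifts(range(n, 0, -1)))     # 45 - worst case: n(n-1)/2 shifts
```

For sorted input the inner loop never runs (linear time); for reversed input it runs i times on iteration i, summing to n(n-1)/2, which grows quadratically.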
  18. Why do we focus on the worst case with regard to time complexity?
    The worst case is an upper bound on total execution time - it guarantees the sort will complete within that amount of time, which matters when timeliness is important.
  19. What exactly does theta (n) mean? How is it different from theta (n^2)?
    Theta (n) means that the program runs in linear time. The actual running time could be expressed by n, or it could be 108n - 36; we only concern ourselves with the order of growth. An algorithm whose running time is theta (n^2) would be expected to take significantly more time than a theta (n) algorithm for sufficiently large inputs.
  20. What is the difference between theta notation and big O notation?
    Theta notation gives both an upper and lower bound to the time complexity of an algorithm, while big-O only gives an upper bound. Theta notation is stronger than big O notation.
  21. Why is omega notation used?
    Omega notation is used to provide an asymptotic lower bound.
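The three notations above have standard formal definitions, which can be stated as:

```latex
f(n) = O(g(n)) \iff \exists\, c > 0,\ n_0 : 0 \le f(n) \le c\,g(n) \ \text{for all } n \ge n_0
\qquad \text{(upper bound)}

f(n) = \Omega(g(n)) \iff \exists\, c > 0,\ n_0 : 0 \le c\,g(n) \le f(n) \ \text{for all } n \ge n_0
\qquad \text{(lower bound)}

f(n) = \Theta(g(n)) \iff f(n) = O(g(n)) \ \text{and}\ f(n) = \Omega(g(n))
\qquad \text{(tight bound)}
```

For example, 108n - 36 is Theta (n): it is at most 108n (upper bound) and at least n for n >= 1 (lower bound).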
  22. Insertion sort is __________, while merge sort can be described as __________.
    incremental, divide and conquer
  23. Generally, how does merge sort work?
    An array A contains the elements to be sorted. A is split roughly in half into 2 sub-arrays, and this splitting is repeated until the sub-arrays cannot be divided further (one element each). This is the "divide" step. The "conquer" (combine) step merges pairs of sorted sub-arrays to produce larger sorted sub-arrays, until the entire array is sorted. This is a recursive algorithm.
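The divide and combine steps above can be sketched in Python (a minimal recursive version that returns a new list rather than sorting in place, unlike the textbook pseudocode):

```python
def merge_sort(a):
    """Return a new ascending-sorted list (recursive divide and conquer)."""
    if len(a) <= 1:                     # base case: cannot divide further
        return list(a)
    mid = len(a) // 2
    left = merge_sort(a[:mid])          # divide: recursively sort each half
    right = merge_sort(a[mid:])
    return merge(left, right)           # conquer: combine the sorted halves

def merge(left, right):
    """Merge two sorted lists into one sorted list."""
    out, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:         # take the smaller front element
            out.append(left[i]); i += 1
        else:
            out.append(right[j]); j += 1
    out.extend(left[i:])                # at most one of these is non-empty
    out.extend(right[j:])
    return out

print(merge_sort([5, 2, 4, 6, 1, 3]))  # [1, 2, 3, 4, 5, 6]
```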
  24. Look at the Merge-Sort pseudocode.
  25. Look at the Merge pseudocode.
  26. Look at the visual demonstration of Merge-Sort.
  27. What is the time complexity of Merge-Sort?
    Theta (n lg n)
  28. Why is the complexity of Merge-Sort logarithmic?
    The lg n factor arises because the array is halved at each level of recursion, so the recursion tree has about lg n (log base 2 of n) levels. The execution cost at each level is c * n (merging all n elements), giving roughly c * n * lg n in total.
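The same argument can be written as a recurrence (a standard sketch, where c stands for the constant per-element cost):

```latex
T(n) =
\begin{cases}
c & \text{if } n = 1,\\[2pt]
2\,T(n/2) + c\,n & \text{if } n > 1.
\end{cases}
```

Each of the lg n + 1 levels of the recursion tree costs c * n, so T(n) = c * n * (lg n + 1) = Theta (n lg n).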
  29. Look at the illustration of Merge-Sort complexity.