Audit II ch 15

  1. attributes sampling
    involves estimating the rate of occurrence of a specific attribute (characteristic) in the client's population of transactions for the audit period.
  2. two ways of sampling items
    • 1. probabilistic (random) sampling
    • 2. nonprobabilistic sampling
  3. probabilistic (random) sampling
    in which every item in the population has an equal chance of being selected (unbiased, representative). It is the best approach in most circumstances.
  4. types of probabilistic sampling
    • a. simple random sample selection
    • b. stratified sample selection
    • c. probability proportional to size sample selection
  5. stratified sample selection
    type of probabilistic sampling that involves separating the population into strata (layers) and applying a different sampling technique to each stratum (layer)
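    A minimal Python sketch of the idea, assuming we stratify receivables by dollar amount (the cutoff, balances, and sample size are purely illustrative): large-dollar items are examined 100%, and a random sample is drawn from the remaining stratum.

      import random

      def stratified_selection(balances, cutoff, small_stratum_sample_size):
          # Stratum 1: large-dollar items, examined 100%
          large = [i for i, amt in enumerate(balances) if amt >= cutoff]
          # Stratum 2: remaining items, sampled at random
          small = [i for i, amt in enumerate(balances) if amt < cutoff]
          return large + random.sample(small, min(small_stratum_sample_size, len(small)))

      # e.g., examine every balance of $50,000 or more, plus 25 random smaller balances
      # selected = stratified_selection(customer_balances, 50000, 25)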
  6. probability proportional to size sample selection
    a type of probabilistic sampling (PPS or monetary unit sampling) that is used for substantive tests
  7. Nonprobabilistic sampling
    is inherently biased as every item in the population does not have an equal chance of being selected and the auditor's judgment plays a role in selecting individual items to sample. - appropriate only in limited circumstances
  8. types of nonprobabilistic sampling
    • a. directed sample selection
    • b. block sample selection
    • c. haphazard sample selection
    • d. systematic sample selection
  9. directed sample selection
    a type of nonprobabilistic sampling in which the auditor selects items to sample based on specific characteristics. As an example of where this is appropriate, consider the auditor's selection of large and unusual items while scanning the client's books as a part of analytical procedures
  10. block sample selection
    type of nonprobabilistic sampling in which the auditor pulls out a sequence of documents to examine. This is obviously not a random sample of documents from throughout the audit period and is seldom appropriate. However, it is appropriate for examining the block of transactions occurring around year-end for proper cutoff.
  11. haphazard sample selection
    a type of nonprobabilistic sampling in which the auditor tries to take a random sample by picking "randomly" from the documents. It introduces bias because the auditor's choices tend to be unconsciously influenced by item characteristics (e.g., size or location), so the sample is not truly random. It is seldom appropriate.
  12. systematic sample selection
    a type of nonprobabilistic sampling in which the auditor deliberately picks every nth document. It is seldom appropriate except when the auditor wants to account for a sequence of prenumbered documents.
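    A minimal Python sketch of every-nth-document selection with a random starting point (the prenumber range and sample size are purely illustrative):

      import random

      def systematic_selection(first_doc, last_doc, sample_size):
          population_size = last_doc - first_doc + 1
          interval = population_size // sample_size      # the "n" in every nth document
          start = random.randint(first_doc, first_doc + interval - 1)
          return [start + i * interval for i in range(sample_size)]

      # e.g., 5,000 sales invoices prenumbered 10001-15000, sample of 50
      print(systematic_selection(10001, 15000, 50))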
  13. two ways of evaluating the results of the sample
    • 1. statistical
    • 2. nonstatistical
  14. statistical evaluation
    evaluation that involves quantitatively determining the estimated population error rate based on the sample error rate in a fairly objective manner
  15. nonstatistical evaluation
    evaluation in which the auditor evaluates the sample judgmentally to estimate an error rate for the population. It is less preferable in most cases.
  16. two sources of random numbers
    • 1. a random number table
    • 2. a random number generator
  17. a random number table
    This was used in the days before PCs. Note that many random numbers must be discarded because they do not match up to the prenumbers used in the audit period.
  18. a random number generator
    The auditor inputs the sample size and the first and last prenumber of the audit period. This develops the desired number of random numbers which correspond to the audit period's prenumbers. The random numbers can also be sorted in numerical order to facilitate selection of individual documents.
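    A minimal Python sketch of what such a generator does (the prenumber range and sample size are purely illustrative):

      import random

      def generate_random_prenumbers(first_prenumber, last_prenumber, sample_size):
          # draw distinct random document numbers within the audit period's prenumber range
          numbers = random.sample(range(first_prenumber, last_prenumber + 1), sample_size)
          return sorted(numbers)  # sorted to make pulling the physical documents easier

      # e.g., select 60 invoices from prenumbers 20001-26000
      print(generate_random_prenumbers(20001, 26000, 60))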
  19. Tolerable exception (error/deviation/misstatement/occurrence) rate (TER)
    The maximum rate of error the auditor will accept in the population and still say that internal control is OK. Analogous to tolerable misstatement in materiality judgments.
  20. Estimated population exception rate (EPER)
    The error rate in the population the auditor expects before beginning sampling. It either may be based on the last audit or on a pilot sample.
  21. Precision
    TER - EPER. This is the cushion we have and is the primary determinant of sample size (precision and sample size are inversely related). Thus, if EPER > TER, we would not even sample for tests of controls, as internal control already appears to be inadequate.
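    For example (numbers are illustrative): if TER is 6% and EPER is 1%, precision is 6% - 1% = 5%. If EPER were 4% instead, the cushion shrinks to 2% and the required sample size grows sharply; if EPER were 7%, EPER > TER and we would not test the control at all.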
  22. Acceptable risk of assessing control risk too low (ARACR, α, or type I error)
    The risk that the sample results indicate that the population error rate is less than the auditor's tolerable error rate when in fact the true population error rate is greater than the auditor's tolerable error rate.  Note we do not know the true population error rate unless we examine every item in the population.  Thus, the auditor says IC is OK when it is actually not OK.  This is obviously bad for the auditor's business risk.  (Is analogous to AAR.)
  23. Confidence level:  1 - ARACR
    This is the probability we do not have a type I error.
  24. Acceptable risk of assessing control risk too high (β, or type II error)
    The risk that the sample results indicate that the population error rate is greater than the auditor's tolerable error rate when in fact the true population error rate is less than the auditor's tolerable error rate.  Thus, the auditor says IC is not OK when it is actually OK.  This decreases audit efficiency, as the auditor will expand substantive tests unnecessarily.  We will not consider this risk further in this chapter.  It is a type of acceptable risk of underreliance, and chapter 17 will refer to it as acceptable risk of incorrect rejection (ARIR).
  25. Sample exception rate (SER)
    # errors found in sample divided by sample size.  Thus, it is the mean error rate of our sample, which is our best estimate of the population error rate.  It is the point estimate of the sample, or the sample mean (which is our best estimate of the “true” population mean, μ).
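    For example (numbers are illustrative): 3 exceptions found in a sample of 100 items gives SER = 3/100 = 3%, our point estimate of the population exception rate.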
  26. Computed upper exception rate (CUER)
    sample exception rate plus an allowance for sampling risk.  CUER is the maximum rate of error we expect in the population based on our sample results.  It is the same as the upper confidence limit (recall confidence intervals from statistics).  Note that with regard to IC, in most circumstances we are not interested in the minimum rate of error, which we could call the computed lower exception rate.  The computed lower exception rate would be the same as the lower confidence limit.
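    In practice CUER is read from attributes sampling tables for a given ARACR, sample size, and number of exceptions.  A rough Python sketch, assuming a one-sided binomial (Clopper-Pearson) upper bound, which closely approximates the table values (the inputs below are purely illustrative):

      from scipy.stats import beta

      def cuer(sample_size, exceptions, aracr):
          # upper bound on the population exception rate at confidence level 1 - ARACR
          if exceptions >= sample_size:
              return 1.0
          return beta.ppf(1 - aracr, exceptions + 1, sample_size - exceptions)

      # e.g., 1 exception in a sample of 100 at 5% ARACR:
      # SER = 1%, CUER is about 4.7%, so the allowance for sampling risk is about 3.7%
      print(round(cuer(100, 1, 0.05), 3))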
  27. Sampling risk (sampling error)
    The risk that our sample is not representative due to type I (ARACR) and type II error.  It is always present (unless we examine every item in the population), so it does not stem from auditor negligence.  If the auditor's sample leads him or her to reach the wrong conclusion, s/he must prove it is due to sampling risk in order to not be held liable.
  28. Nonsampling risk (nonsampling error)
    The risk that an auditor does something wrong, such as not recognizing an error or applying the wrong sampling technique.  It stems from negligence and is thus avoidable.
  29. We cannot reduce ARACR to zero (unless we examine every item in the population) as sampling is a stochastic process.  However, the auditor will specify an ARACR as a part of the sampling plan. 
    ARACR decreases as the auditor becomes more conservative (as with an SEC client), as the auditor accepts less of a chance of reaching the wrong conclusion with regard to IC.
  30. ARACR is a type of acceptable risk of overreliance (ARO).  Note that the term ARO can be used for more than just IC, whereas ARACR cannot.  You will see this same risk referred to as acceptable risk of incorrect acceptance (ARIA) in chapter 17 of the textbook.  I will tend to use the term "acceptable risk of type I error" in class to avoid confusion!