SixSigmaGlossary

  1. Z VALUE
    a standardized value formed by subtracting the mean and then dividing this difference by the standard deviation.
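    A minimal Python sketch of the calculation (the mean, standard deviation, and observation below are illustrative values, not from this glossary):

      mean, std_dev = 50.0, 5.0   # hypothetical process mean and standard deviation
      x = 57.5                    # an observed value
      z = (x - mean) / std_dev    # subtract the mean, divide by the standard deviation
      print(z)                    # 1.5 -> the value lies 1.5 sigma above the mean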
  2. Z DISTRIBUTION
    see “Standardized Normal Distribution”.
  3. X-BAR CHART
    average chart
  4. X-BAR AND S CHARTS
    For variable data; control charts for the average and standard deviation (sigma) of subgroups of data.
  5. X-BAR AND R CHARTS
    For variable data; control charts for the average and range of subgroups of data.
  7. VARIATION
    a change in data, a characteristic, or a function that is caused by one of four factors: special causes, common causes, tampering, or structural variation.
  8. VARIANCE
    a measure of variability in a data set or population. It is the square of the standard deviation.
  9. VARIABLES DATA
    measurement information. Control charts based on variables data include average (X-bar) chart, range (R) chart, and sample standard deviation (s) chart.
  11. UCL
    upper control limit. For control charts, the upper limit below which a process statistic must remain to be in control. Typically, this is 3 standard deviations above the center line.
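    As a simplified sketch, the limits for an X-bar chart can be computed from subgroup averages with the standard library (the data are illustrative; production charts usually estimate sigma from subgroup ranges with tabulated constants instead):

      import statistics

      subgroup_means = [10.1, 9.8, 10.3, 10.0, 9.9, 10.2, 10.1, 9.7]  # hypothetical data
      center = statistics.mean(subgroup_means)
      sigma = statistics.stdev(subgroup_means)  # simple estimate of subgroup-mean spread
      ucl = center + 3 * sigma                  # upper control limit
      lcl = center - 3 * sigma                  # lower control limit
      print(f"LCL={lcl:.3f}  CL={center:.3f}  UCL={ucl:.3f}")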
  12. U CHART
    count-per-unit chart; a control chart of the average number of defects per part in a subgroup.
  13. TYPE II ERROR
    an incorrect decision to accept something when it is unacceptable
  14. TYPE I ERROR
    an incorrect decision to reject something (such as a statistical hypothesis or a lot of products) when it is acceptable
  15. TUKEY TEST
    a statistical test to measure the difference between several mean values and tell the user which ones are statistically different from the rest.
  16. TREND CONTROL CHART
    a control chart in which the deviation of the subgroup average, X-bar, from an expected trend in the process level is used to evaluate the stability of a process
  17. TOLERANCE
    the permissible range of variation in a particular dimension of a product. Tolerances are often set by engineering requirements to ensure components will function together properly. In DOE, a measure (from 0 to 1) of the independence among independent variables.
  19. TEST STATISTIC
    a single value which combines the evidence obtained from sample data. The P-value in a hypothesis test is directly related to this value.
  20. TAMPERING
    action taken to compensate for variation within the control limits of a stable system. Tampering increases rather than decreases variation, as evidenced in the funnel experiment.
  21. T TEST
    a hypothesis test of population means when small samples are involved.
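    A one-sample sketch, assuming scipy is available (the measurements and the hypothesized mean of 5.0 are illustrative):

      from scipy import stats

      sample = [4.8, 5.1, 4.9, 5.3, 5.0, 4.7, 5.2]   # hypothetical small sample
      t_stat, p_value = stats.ttest_1samp(sample, popmean=5.0)
      print(t_stat, p_value)   # a small P-value would argue the population mean is not 5.0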
  22. T DISTRIBUTION
    a symmetric, bell-shaped distribution that resembles the standardized normal (or Z) distribution, but it typically has more area in its tails than does the Z distribution. That is, it has greater variability than the Z distribution.
  23. STATISTICAL PROCESS CONTROL (SPC)
    the application of statistical techniques to control a process. Often the term "statistical quality control" is used interchangeably with "statistical process control."
  24. STATISTICAL INFERENCE
    the process of drawing conclusions about a population on the basis of statistics.
  25. STATISTIC
    a quantity calculated from a sample of observations, most often to form an estimate of some population parameter.
  26. STANDARDIZED NORMAL DISTRIBUTION
    the normal distribution of a random variable having a mean of 0 and a standard deviation of 1. It is denoted by the symbol Z and is also called the Z distribution.
  27. STANDARD DEVIATION
    A measure of variability (dispersion) of observations that is the positive square root of the population variance.
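    Both the population and sample versions can be computed with Python's standard library (data illustrative):

      import statistics

      data = [2, 4, 4, 4, 5, 5, 7, 9]
      print(statistics.pvariance(data))  # population variance -> 4.0
      print(statistics.pstdev(data))     # population standard deviation -> 2.0 (its square root)
      print(statistics.stdev(data))      # sample standard deviation (n - 1 denominator)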
  28. SPECIFICATION
    a document that states the requirements to which a given product or service must conform.
  29. SPECIFICATION LIMITS
    the bounds of acceptable values for a given product or process. They should be customer driven.
  30. SPECIAL CAUSES
    causes of variation that arise because of special circumstances. They are not an inherent part of a process. Special causes are also referred to as assignable causes. (See also "common causes.")
  31. SKEWNESS
    a measure of the symmetry of a distribution. A positive value indicates that the distribution has a greater tendency to tail to the right (positively skewed or skewed to the right), and a negative value indicates a greater tendency of the distribution to tail to the left (negatively skewed or skewed to the left). Skewness is 0 for a normal distribution.
  32. SIX-SIGMA QUALITY
    a term used to generally indicate that a process is well within specifications, i.e., that the specification range is ±6 standard deviations. The term is usually associated with Motorola, which named one of its key operational initiatives "Six Sigma Quality."
  33. SIX SIGMA
    see “Six Sigma Quality”.
  34. SIGMA
    (σ) the standard deviation of a statistical population.
  35. SIGMA QUALITY LEVEL
    a commonly used measure of process capability that represents the number of standard deviations between the center of a process and the closest specification limit.
  36. SEVEN TOOLS OF QUALITY
    tools that help organizations understand their processes in order to improve them. The tools are the cause-and-effect diagram, check sheet, control chart, flowchart, histogram, Pareto chart, and scatter diagram.
  37. SCATTER DIAGRAM
    a graphical technique to analyze the relationship between two variables. Two sets of data are plotted on a graph, with the y axis being used for the variable to be predicted and the x axis being used for the variable to make the prediction. The graph will show possible relationships (although two variables might appear to be related, they might not be; those who know most about the variables must make that evaluation). The scatter diagram is one of the seven tools of quality.
  38. SAMPLE
    a group of units, portion of material, or observations taken from a larger collection of units, quantity of material, or observations that serves to provide information that may be used as a basis for making a decision concerning the larger quantity.
  39. SAMPLE STANDARD DEVIATION CHART
    a control chart in which the subgroup standard deviation, s, is used to evaluate the stability of the variability within a process.
  40. SAMPLE SIZE
    the number of elements or units in a sample.
  41. S CHART
    sample standard deviation chart
  42. RUN CHART
    a basic graphical tool that charts a process over time, recording either individual readings or averages.
  43. ROBUSTNESS
    the condition of a product or process design that remains relatively stable with a minimum of variation even though factors that influence operations or usage, such as environment and wear, are constantly changing
  44. RESIDUAL
    the difference between an observed value and a predicted value.
  45. REPRODUCIBILITY
    the variation between individual people taking the same measurement and using the same gaging.
  46. REPLICATION
    the repetition of the set of all the treatment combinations to be compared in an experiment. Each of the repetitions is called a replicate.
  47. RELIABILITY
    the probability of a product performing its intended function under stated conditions without failure for a given period of time
  48. REGRESSION
    a statistical technique for determining the best mathematical expression describing the functional relationship between one response and one or more independent variables.
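    A simple-linear-regression sketch, assuming scipy is available (data illustrative):

      from scipy import stats

      x = [1, 2, 3, 4, 5]                  # independent variable
      y = [2.1, 3.9, 6.2, 8.0, 9.8]        # response
      fit = stats.linregress(x, y)
      print(fit.slope, fit.intercept)      # best-fit line: y = slope*x + intercept
      print(fit.rvalue ** 2)               # R-squared, the coefficient of determination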
  49. RANGE CHART
    a control chart in which the subgroup range, R, is used to evaluate the stability of the variability within a process
  50. RANDOM
    Varying with no discernable pattern.
  51. RANDOM SAMPLING
    a commonly used sampling technique in which sample units are selected in such a manner that all combinations of n units under consideration have an equal chance of being selected as the sample.
  52. QUALITY LOSS FUNCTION
    a parabolic approximation of the quality loss that occurs when a quality characteristic deviates from its target value. The quality loss function is expressed in monetary units
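    In its common single-characteristic form the loss is L(y) = k(y - T)^2, where T is the target and k a monetary constant; a sketch with illustrative numbers:

      def quality_loss(y, target, k):
          # Loss grows with the square of the deviation from target.
          return k * (y - target) ** 2

      print(quality_loss(10.2, target=10.0, k=50.0))   # 2.0 monetary units of loss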
  53. QUALITY FUNCTION DEPLOYMENT (QFD)
    a structured method in which customer requirements are translated into appropriate technical requirements for each stage of product development and production. The QFD process is often referred to as listening to the voice of the customer.
  54. PROCESS MAPPING
    see “Flowcharting”.
  55. PROCESS CAPABILITY
    a statistical measure of the inherent process variability for a given characteristic. The most widely accepted formula for process capability is 6σ.
  56. PROCESS CAPABILITY/PERFORMANCE
    see “Cp”, “Cpk”.
  57. PROCESS CAPABILITY INDEX
    the value of the tolerance specified for the characteristic divided by the process capability. There are several types of process capability indexes, including the widely used Cpk and Cp.
  58. PROBABILITY
    a measure of the likelihood of a given event occurring. It is a measure that takes on values between 0 and 1 inclusive, with 1 being the certain event and 0 meaning there is no chance of the event occurring. How probabilities are assigned is another matter; the relative frequency approach to assigning probabilities is one of the most common.
  59. PROBABILITY DISTRIBUTION
    the assignment of probabilities to all of the possible outcomes from an experiment. This assignment is usually portrayed by way of a table, graph, or a formula.
  60. PRE-CONTROL CHARTS
    a method of controlling a process based on the specification limits. It is used to prevent the manufacture of defective units, but does not work toward minimizing variation in the process. The area between the specification limits is split into zones (green, yellow, and red), and adjustments are made when a specified number of points fall in the yellow or red zones.
  61. POPULATION
    a set or collection of objects or individuals. It can also be the corresponding set of values which measure a certain characteristic of a set of objects or individuals.
  62. POISSON DISTRIBUTION
    a probability distribution for the number of occurrences per unit interval (time or space); λ, the average number of occurrences per interval, is the only parameter. The Poisson distribution is a good approximation of the binomial distribution for the case where n is large and p is small; λ = np.
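    The approximation can be checked numerically, assuming scipy is available (n and p illustrative):

      from scipy.stats import binom, poisson

      n, p = 1000, 0.002            # large n, small p
      lam = n * p                   # lambda = np = 2.0
      for k in range(5):            # the two probabilities agree closely
          print(k, binom.pmf(k, n, p), poisson.pmf(k, lam))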
  63. PARETO CHART
    a graphical tool for ranking causes from most significant to least significant. It is based on the Pareto principle, which was first defined by J.M. Juran in 1950. The principle, named after 19th- century economist Vilfredo Pareto, suggests that most effects come from relatively few causes; that is, 80% of the effects come from 20% of the causes. The Pareto chart is one of the seven tools of quality.
  64. P VALUE
    the probability of making a Type I error. This value comes from the data itself. It also provides the exact level of significance of a hypothesis test.
  65. P CHART
    for attribute data; a control chart of the proportion of defective units (or fraction defective) in a subgroup. Based on the binomial distribution.
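    A sketch of the usual 3-sigma limits for a p chart with constant subgroup size n (values illustrative):

      import math

      n = 200                                    # subgroup size
      p_bar = 0.04                               # average fraction defective
      margin = 3 * math.sqrt(p_bar * (1 - p_bar) / n)
      ucl = p_bar + margin
      lcl = max(0.0, p_bar - margin)             # a proportion cannot go below zero
      print(lcl, ucl)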
  66. OUTER ARRAY
    a Taguchi term used in parameter design to identify the combinations of noise factors to be studied in a robust designed experiment.
  67. OUT-OF-CONTROL
    a process is said to be out-of-control if it exhibits variations larger than its control limits or shows a systematic pattern of variation.
  68. ONE-AT-A-TIME APPROACH
    a popular, but inefficient way to conduct a designed experiment.
  69. NP CHART
    for attribute data; a control chart of the number of defective units in a subgroup. Assumes a constant subgroup size. Based on the binomial distribution.
  70. NORMAL DISTRIBUTION
    the distribution characterized by the smooth, bell-shaped curve.
  71. NOMINAL
    for a product whose size is of concern; the desired mean value for the particular dimension, the target value.
  72. NOMINAL GROUP TECHNIQUE (NGT)
    a technique, similar to brainstorming, used by teams to generate ideas on a particular subject. Team members are asked to silently come up with as many ideas as possible, writing them down. Each member is then asked to share one idea, which is recorded. After all the ideas are recorded, they are discussed and prioritized by the group.
  73. NOISE
    unexplained variability in the response. Typically, due to variables which are not controlled.
  74. MULTIVARIATE CONTROL CHART
    a control chart for evaluating the stability of a process in terms of the levels of two or more variables or characteristics.
  75. MULTI-VARI CHART
    see “Multivariate Control Chart”.
  76. MEDIAN
    the middle value of a data set when the values are arranged in either ascending or descending order.
  77. MEAN
    the average of a set of values. We usually use x-bar to denote a sample mean, while the Greek letter μ (mu) denotes a population mean.
  78. MEAN TIME BETWEEN FAILURES (MTBF)
    the average time interval between failures for a product for a defined unit of measure (e.g., operating hours, cycles, miles).
  79. MAIN EFFECTS PLOT
    a graphic display showing the influence a single factor has on the response when it is changed from one level to another. Often used to represent the “linear effect” associated with a factor.
  80. LOSS FUNCTION
    a technique for quantifying loss due to production deviations from target values.
  81. LCL
    lower control limit. For control charts, the limit above which the process subgroup statistics must remain when the process is in control. Typically 3 standard deviations below the center line.
  82. KURTOSIS
    a measure of the shape of a distribution. A positive value indicates that the distribution has longer tails than the normal distribution (leptokurtosis), while a negative value indicates that the distribution has shorter tails (platykurtosis). For the normal distribution, the excess kurtosis is 0.
  83. KAIZEN
    a Japanese term that means gradual unending improvement by doing little things better and setting and achieving increasingly higher standards. The term was made famous by Masaaki Imai in his book Kaizen
  84. JUST-IN-TIME (JIT)
    a strategy that coordinates scheduling, inventory, and production to move away from batch mode of production in order to improve quality and reduce inventories.
  85. INTERACTION PLOT
    a graphical display showing how two factors (input variables) interact if one factor’s effect on the response is dependent upon the level of the other factor.
  86. INSPECTION
    measuring, examining, testing, or gauging one or more characteristics of a product or service and comparing the results with specified requirements to determine whether conformity is achieved for each characteristic
  87. INNER ARRAY
    a Taguchi term used in parameter design to identify the combinations of controllable factors to be studied in a designed experiment. Also called “design array” or “design matrix”.
  88. IN-CONTROL PROCESS
    a process in which the statistical measure being evaluated is in a state of statistical control (i.e., the variations among the observed sampling results can be attributed to a constant system of chance causes). (See also "out-of-control process.")
  89. HYPOTHESIS TESTS
    a procedure whereby one of two mutually exclusive and exhaustive statements about a population parameter is concluded. Information from a sample is used to infer something about the population from which the sample was drawn.
  90. HYPOTHESIS TESTS, NULL
    the hypothesis tested in tests of significance is that there is no difference (null) between the population of the sample and the specified population (or between the populations associated with each sample). The null hypothesis can never be proved true. It can, however, be shown, with specified risks of error, to be untrue; that is, a difference can be shown to exist between the populations. If it is not disproved, one may surmise that it is true. (It may be that there is insufficient power to prove the existence of a difference rather than that there is no difference; that is, the sample size may be too small. By specifying the minimum difference that one wants to detect and β, the risk of failing to detect a difference of this size, the actual sample size required can be determined.)
  91. HYPOTHESIS TESTS, ALTERNATIVE
    the hypothesis that is accepted if the null hypothesis is disproved. The choice of alternative hypothesis will determine whether "one-tail" or "two-tail" tests are appropriate.
  92. HISTOGRAM
    a graphic summary of variation in a set of data. The pictorial nature of the histogram lets people see patterns that are difficult to see in a simple table of numbers. The histogram is one of the seven tools of quality.
  93. GOODNESS-OF-FIT
    any measure of how well a set of data matches a proposed distribution. Chi-square is the most common measure for frequency distributions. Simple visual inspection of a histogram is a less quantitative, but equally valid, way to determine goodness-of-fit.
  94. GAUGE REPEATABILITY AND REPRODUCIBILITY
    the evaluation of a gauging instrument's accuracy by determining whether the measurements taken with it are repeatable (i.e., there is close agreement among a number of consecutive measurements of the output for the same value of the input under the same operating conditions) and reproducible (i.e., there is close agreement among repeated measurements of the output for the same value of input made under the same operating conditions over a period of time)
  95. GAGE R & R
    see “Gauge Repeatability and Reproducibility”.
  96. FULL FACTORIAL
    all possible combinations of the factors and levels. Given k factors, all with two levels, there will be 2^k runs. If all factors have 3 levels, there will be 3^k runs.
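    The runs can be enumerated directly; a sketch with three hypothetical two-level factors coded -1/+1:

      from itertools import product

      factors = {"temperature": [-1, +1], "pressure": [-1, +1], "time": [-1, +1]}
      runs = list(product(*factors.values()))
      print(len(runs))                    # 2**3 = 8 runs
      for run in runs:
          print(dict(zip(factors, run)))  # one treatment combination per run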
  97. FREQUENCY DISTRIBUTION
    a set of all the various values that individual observations may have and the frequency of their occurrence in the sample or population.
  98. FORCE FIELD ANALYSIS
    a technique for analyzing the forces that will aid or hinder an organization in reaching an objective. An arrow pointing to an objective is drawn down the middle of a piece of paper. The factors that will aid the objective's achievement (called the driving forces) are listed on the left side of the arrow; the factors that will hinder its achievement (called the restraining forces) are listed on the right side of the arrow.
  99. FLOWCHARTING
    a graphical representation of the steps in a process. Flowcharts are drawn to better understand processes. The flowchart is one of the seven tools of quality.
  100. FISHBONE DIAGRAM
    see "cause-and-effect diagram"
  101. FAILURE MODE EFFECT ANALYSIS (FMEA)
    a procedure in which each potential failure mode in every sub- item of an item is analyzed to determine its effect on other sub-items and on the required function of the item.
  102. FACTORIAL EXPERIMENTS
    experiments in which all possible treatment combinations formed from two or more factors, each being studied at two or more versions (levels), are examined so that interactions (differential effects) as well as main effects can be estimated.
  103. FACTOR
    an assignable cause which may affect the responses (test results) and of which different versions (levels) are included in the experiment.
  104. F STATISTIC
    a test statistic used to compare the variance from two normal populations.
  105. F DISTRIBUTION
    distribution of F-statistics.
  106. EXPONENTIAL DISTRIBUTION
    a probability distribution mathematically described by an exponential function. Used to describe the probability that a product survives a length of time t in service under the assumption that the probability of a product failing in any small time interval is independent of time.
  107. EXPERIMENTAL DESIGN
    a formal plan that details the specifics for conducting an experiment, such as which responses, factors, levels, blocks, treatments, and tools are to be used
  108. DISTRIBUTIONS
    see “Probability Distribution”.
  109. DESIGN OF EXPERIMENTS (DOE)
    a branch of applied statistics dealing with planning, conducting, analyzing, and interpreting controlled tests to evaluate the factors that control the value of a parameter or group of parameters
  110. DEMERIT CHART
    a control chart for evaluating a process in terms of a demerit (or quality score), i.e., a weighted sum of counts of various classified nonconformities.
  111. DEGREES OF FREEDOM
    a parameter in the t, F, and χ² distributions. It is a measure of the amount of information available for estimating the population variance, σ². It is the number of independent observations minus the number of parameters estimated.
  112. DEFECT
    a product's or service's nonfulfillment of an intended requirement or reasonable expectation for use, including safety considerations. There are four classes of defects: Class 1 (very serious), Class 2 (serious), Class 3 (major), and Class 4 (minor).
  113. DECISION MATRIX
    a matrix used by teams to evaluate problems or possible solutions. After a matrix is drawn to evaluate possible solutions, for example, the team lists them in the far-left vertical column. Next, the team selects criteria to rate the possible solutions, writing them across the top row. Third, each possible solution is rated on a scale of 1 to 5 for each criterion and the rating recorded in the corresponding grid. Finally, the ratings of all the criteria for each possible solution are added to determine its total score. The total score is then used to help decide which solution deserves the most attention.
  114. D CHART
    demerit chart
  115. CUMULATIVE SUM CONTROL CHART (CUSUM)
    a control chart which plots the cumulative deviation of each subgroup’s average from the nominal value. If the process consistently produces parts near the nominal, the CuSum chart shows a line which is essentially horizontal. If the process begins to shift, the line will show an upward or downward trend. The CuSum chart is sensitive to small shifts in process average.
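    A sketch of the plotted statistic (the nominal value and subgroup averages are illustrative):

      from itertools import accumulate

      nominal = 10.0
      subgroup_means = [10.02, 9.98, 10.05, 10.08, 10.11, 10.14]
      cusum = list(accumulate(m - nominal for m in subgroup_means))
      print(cusum)   # a persistent upward drift signals a small shift in the process average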
  116. Cpk
    during process capability studies, Cpk is an index used to compare the natural tolerance of a process with the specification limits. Cpk has a value equal to Cp if the process is centered on the nominal; if Cpk is negative, the process mean is outside the specification limits; if Cpk is between 0 and 1, then part of the natural tolerance of the process falls outside the specification limits. If Cpk is larger than 1, the natural tolerance falls completely within the specification limits. A value of 1.33 or greater is usually desired.
  117. Cp
    during process capability studies, Cp is a capability index which shows the process capability potential but does not consider how centered the process is. Cp may range from 0 to infinity, with a large value indicating greater potential capability. A value of 1.33 or greater is usually desired.
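    A sketch computing both indexes from hypothetical specification limits and process estimates:

      usl, lsl = 10.6, 9.4                               # illustrative specification limits
      mean, sigma = 10.1, 0.1                            # estimated process center and std dev
      cp = (usl - lsl) / (6 * sigma)                     # potential capability; ignores centering
      cpk = min(usl - mean, mean - lsl) / (3 * sigma)    # penalizes an off-center process
      print(cp, cpk)                                     # 2.0 and ~1.67: capable but off center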
  118. COUNT-PER-UNIT CHART
    a control chart for evaluating the stability of a process in terms of the average count of events of a given classification per unit occurring in a sample.
  119. COUNT CHART
    a control chart for evaluating the stability of a process in terms of the count of events of a given classification occurring in a sample.
  120. COST OF QUALITY
    a term coined by Philip Crosby referring to the cost of poor quality
  121. COST OF POOR QUALITY
    the costs associated with providing poor-quality products or services. There are four categories of costs: internal failure costs, external failure costs, appraisal costs, and prevention costs.
  122. CORRELATION COEFFICIENT
    (R) a number between -1 and 1 that indicates the degree of linear relationship between two sets of numbers.
  123. CORRECTIVE ACTION
    the implementation of solutions resulting in the reduction or elimination of an identified problem.
  124. CONTROL
    a process is said to be in a state of statistical control if the process exhibits only random variation (as opposed to systematic variation and/or variation with known sources). When monitoring control with control charts, a state of control is exhibited when all points remain between set control limits without any abnormal (non-random) patterns.
  125. CONTROL LIMITS
    upper and lower bounds in a control chart that are determined by the process itself. They can be used to detect special causes of variation. They are usually set at ±3 standard deviations from the center line.
  126. CONTROL CHART
    a chart with upper and lower control limits on which values of some statistical measure for a series of samples or subgroups are plotted. The chart frequently shows a central line to help detect the trend of plotted values toward either control limit.
  127. CONTINUOUS IMPROVEMENT
    the ongoing improvement of products, services, or processes through incremental and breakthrough improvements.
  128. CONFORMANCE
    an affirmative indication or judgment that a product or service has met the requirements of a relevant specification, contract, or regulation.
  129. CONFIDENCE LIMITS
    the end points of the interval about the sample statistic that is believed, with a specified confidence coefficient, to include the population parameter.
  130. CONFIDENCE INTERVAL
    range within which a parameter of a population (e.g., mean, standard deviation, etc.) may be expected to fall, on the basis of a measurement, with some specified confidence level or confidence coefficient.
  131. COMPANY CULTURE
    a system of values, beliefs, and behaviors inherent in a company. To optimize business performance, top management must define and create the necessary culture.
  132. COMMON CAUSES
    causes of variation that are inherent in a process over time. They affect every outcome of the process and everyone working in the process. (See also "special causes.")
  133. COEFFICIENT OF VARIATION
    a measure of relative dispersion that is the standard deviation divided by the mean and multiplied by 100 to give a percentage value. This measure cannot be used when the data take both negative and positive values or when it has been coded in such a way that the value X = 0 does not coincide with the origin.
  134. COEFFICIENT OF DETERMINATION
    (R²); the square of the sample correlation coefficient; a measure of the part of the variation in one variable that can be explained by its linear relationship with another variable; it represents the strength of a model. (1 - R²) * 100% is the percentage of noise in the data not accounted for by the model.
  135. CHI-SQUARE
    the test statistic used when testing the null hypothesis of independence in a contingency table or when testing the null hypothesis of a set of data following a prescribed distribution.
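    A sketch of the independence test on a 2x2 contingency table, assuming scipy is available (counts illustrative):

      from scipy.stats import chi2_contingency

      table = [[30, 10],     # e.g., pass/fail counts for two hypothetical machines
               [20, 25]]
      chi2, p_value, dof, expected = chi2_contingency(table)
      print(chi2, p_value, dof)   # a small P-value suggests the rows and columns are related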
  136. CHI-SQUARE DISTRIBUTION
    the distribution of chi-square statistics.
  137. CHECKLIST
    a tool used to ensure that all important steps or actions in an operation have been taken. Checklists contain items that are important or relevant to an issue or situation. Checklists are often confused with check sheets and data sheets (see individual entries).
  138. CHECK SHEET
    a simple data-recording device. The check sheet is custom-designed by the user, which allows him or her to readily interpret the results. The check sheet is one of the seven tools of quality. Check sheets are often confused with data sheets and checklists (see individual entries).
  139. CENTRAL LIMIT THEOREM
    if samples of size n are drawn from a population and the values of x-bar are calculated for each sample, the shape of the distribution of these sample means is found to approach a normal distribution for sufficiently large n. This theorem allows one to use the assumption of a normal distribution when dealing with x-bar. “Sufficiently large” depends on the population’s distribution and what range of x-bar is being considered; for practical purposes, the easiest approach may be to take a number of samples of a desired size and see if their means are normally distributed. If not, the sample size should be increased. This theorem is one of the most important results in all of statistics and is the heart of inferential statistics.
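    The "take a number of samples and look at their means" approach can be simulated; a sketch drawing from a skewed (exponential) population:

      import random
      import statistics

      random.seed(1)
      sample_means = [statistics.mean(random.expovariate(1.0) for _ in range(30))
                      for _ in range(1000)]
      # Despite the skewed parent population, the means cluster near 1.0
      # in a roughly normal bell shape, with spread close to 1/sqrt(30).
      print(statistics.mean(sample_means), statistics.stdev(sample_means))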
  140. CENTRAL COMPOSITE DESIGN
    a 3-level design that starts with a 2-level fractional factorial and some center points. If needed, axial points can be tested to complete quadratic terms. Used typically for quantitative factors and designed to estimate all linear effects plus desired quadratics and 2-way interactions.
  141. CAUSE-AND-EFFECT DIAGRAM
    a tool for analyzing process dispersion. It is also referred to as the Ishikawa diagram, because Kaoru Ishikawa developed it, and the fishbone diagram, because the complete diagram resembles a fish skeleton. The diagram illustrates the main causes and subcauses leading to an effect (symptom). The cause-and-effect diagram is one of the seven tools of quality.
  142. CAPABILITY
    a measure of quality for a process usually expressed as sigma capability, Cpk, or defects per million opportunities (DPMO). It is obtained by comparing the actual process with the specification limits.
  143. CALIBRATION
    the comparison of a measurement instrument or system of unverified accuracy to a measurement instrument or system of known accuracy to detect any variation from the required performance specification.
  144. C CHART
    count chart.
  145. BRAINSTORMING
    a technique that teams use to generate ideas on a particular subject. Each person in the team is asked to think creatively and write down as many ideas as possible. The ideas are not discussed or reviewed until after the brainstorming session.
  146. BOX-BEHNKEN DESIGN
    a 3-level design used for quantitative factors and designed to estimate all linear, quadratic, and 2-way interaction effects.
  147. BLOCK DIAGRAM
    a diagram that shows the operation, interrelationships, and interdependencies of components in a system. Boxes, or blocks (hence the name), represent the components; connecting lines between the blocks represent interfaces. There are two types of block diagrams: a functional block diagram, which shows a system's subsystems and lower-level products, their interrelationships, and interfaces with other systems; and a reliability block diagram, which is similar to the functional block diagram except that it is modified to emphasize those aspects influencing reliability.
  148. BINOMIAL DISTRIBUTION
    given that a trial can have only two possible outcomes (yes/no, pass/fail, heads/tails), of which one outcome has probability p and the other probability q = 1 - p, the probability that the outcome represented by p occurs x times in n trials is given by the binomial distribution.
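    The formula is P(X = x) = C(n, x) * p^x * q^(n - x); a sketch with illustrative numbers:

      from math import comb

      def binom_pmf(x, n, p):
          # C(n, x) * p**x * q**(n - x), with q = 1 - p
          return comb(n, x) * (p ** x) * ((1 - p) ** (n - x))

      print(binom_pmf(2, n=10, p=0.1))   # probability of exactly 2 "p" outcomes in 10 trials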
  149. BIMODAL DISTRIBUTION
    a frequency distribution which has two peaks. Usually an indication of samples from two processes incorrectly analyzed as a single process.
  150. BIAS
    systematic error which leads to a difference between the average result of a population of measurements and the true or accepted value of the quantity being measured.
  152. BETA RISK
    see “Type II Error”.
  153. BENCHMARKING
    an improvement process in which a company measures its performance against that of best-in-class companies, determines how those companies achieved their performance levels, and uses the information to improve its own performance. The subjects that can be benchmarked include strategies, operations, processes, and procedures.
  154. BALANCED DESIGN
    a 2-level experimental design is balanced if each factor is run the same number of times at the high and low levels.
  155. ATTRIBUTE DATA
    go/no-go information. The control charts based on attribute data include percent chart, number of affected units chart, count chart, count-per-unit chart, quality score chart, and demerit chart.
  156. ANALYSIS OF VARIANCE (ANOVA)
    a basic statistical technique for analyzing experimental data. It subdivides the total variation of a data set into meaningful component parts associated with specific sources of variation in order to test a hypothesis on the parameters of the model or to estimate variance components. There are three models: fixed, random, and mixed.
  157. ANALYSIS OF MEANS (ANOM)
    a statistical procedure for troubleshooting industrial processes and analyzing the results of experimental designs with factors at fixed levels.
  158. ALTERNATIVE HYPOTHESIS
    the hypothesis to be accepted if the null hypothesis is rejected. It is denoted by HA.
  159. ALPHA RISK
    See “Type I Error”.
  160. ALIASING
    when two factors or interaction terms are set at identical levels throughout the entire experiment (i.e., the two columns are 100% correlated).
  161. ACCEPTANCE SAMPLING
    inspection of a sample from a lot to decide whether to accept or not accept that lot. There are two types: attributes sampling and variables sampling.
  162. ACCEPTANCE SAMPLING PLAN
    a specific plan that indicates the sampling sizes and the associated acceptance or non-acceptance criteria to be used. In attributes sampling, for example, there are single, double, multiple, sequential, chain, and skip-lot sampling plans. In variables sampling, there are single, double, and sequential sampling plans.
  163. ACCEPTABLE QUALITY LEVEL
    when a continuing series of lots is considered, a quality level that, for the purposes of sampling inspection, is the limit of a satisfactory process average.
  164. REPEATABILITY
    the extent to which repeated measurements of a particular object with a particular instrument produce the same value.