
Z VALUE
a standardized value formed by subtracting the mean and then dividing this difference by the standard deviation.
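This calculation can be sketched in Python (function name hypothetical):

```python
def z_value(x, mean, std_dev):
    """Standardize x: subtract the mean, then divide the difference by the standard deviation."""
    return (x - mean) / std_dev

# A raw score of 85 from a population with mean 70 and standard deviation 10
# lies 1.5 standard deviations above the mean:
print(z_value(85, 70, 10))  # 1.5
```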

Z DISTRIBUTION
see “Standardized Normal Distribution”.

XBAR CHART
average chart

XBAR AND S CHARTS
For variable data; control charts for the average and standard deviation (sigma) of subgroups of data.

XBAR AND R CHARTS
For variable data; control charts for the average and range of subgroups of data.


VARIATION
a change in data, a characteristic, or a function that is caused by one of four factors

VARIANCE
a measure of variability in a data set or population. It is the square of the standard deviation.
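As a sketch (names hypothetical), the population variance and its square root, the standard deviation:

```python
def variance(data):
    """Population variance: the mean of squared deviations from the mean."""
    m = sum(data) / len(data)
    return sum((x - m) ** 2 for x in data) / len(data)

data = [2, 4, 4, 4, 5, 5, 7, 9]
print(variance(data))         # 4.0
print(variance(data) ** 0.5)  # 2.0, the standard deviation
```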

VARIABLES DATA
measurement information. Control charts based on variables data include average (Xbar) chart, range (R) chart, and sample standard deviation (s) chart.


UCL
upper control limit. For control charts, the upper limit below which a process statistic must remain to be in control. Typically, this is 3 standard deviations above the center line.

U CHART
count per unit chart; a control chart of the average number of defects per part in a subgroup.

TYPE II ERROR
an incorrect decision to accept something when it is unacceptable

TYPE I ERROR
an incorrect decision to reject something (such as a statistical hypothesis or a lot of products) when it is acceptable

TUKEY TEST
a statistical test to measure the difference between several mean values and tell the user which ones are statistically different from the rest.

TREND CONTROL CHART
a control chart in which the deviation of the subgroup average, Xbar, from an expected trend in the process level is used to evaluate the stability of a process

TOLERANCE
the permissible range of variation in a particular dimension of a product. Tolerances are often set by engineering requirements to ensure components will function together properly. In DOE, a measure (from 0 to 1) of the independence among independent variables.


TEST STATISTIC
a single value that combines the evidence obtained from sample data. The P-value in a hypothesis test is directly related to this value.

TAMPERING
action taken to compensate for variation within the control limits of a stable system. Tampering increases rather than decreases variation, as evidenced in the funnel experiment.

T TEST
a hypothesis test of population means when small samples are involved.

T DISTRIBUTION
a symmetric, bell-shaped distribution that resembles the standardized normal (or Z) distribution, but it typically has more area in its tails than does the Z distribution. That is, it has greater variability than the Z distribution.

STATISTICAL PROCESS CONTROL (SPC)
the application of statistical techniques to control a process. Often the term "statistical quality control" is used interchangeably with "statistical process control."

STATISTICAL INFERENCE
the process of drawing conclusions about a population on the basis of statistics.

STATISTIC
a quantity calculated from a sample of observations, most often to form an estimate of some population parameter.

STANDARDIZED NORMAL DISTRIBUTION
a normal distribution of a random variable having a mean of 0 and a standard deviation of 1. It is denoted by the symbol Z and is also called the Z distribution.

STANDARD DEVIATION
a measure of variability (dispersion) of observations that is the positive square root of the population variance.

SPECIFICATION
a document that states the requirements to which a given product or service must conform.

SPECIFICATION LIMITS
the bounds of acceptable values for a given product or process. They should be customer driven.

SPECIAL CAUSES
causes of variation that arise because of special circumstances. They are not an inherent part of a process. Special causes are also referred to as assignable causes. (See also "common causes.")

SKEWNESS
a measure of the symmetry of a distribution. A positive value indicates that the distribution has a greater tendency to tail to the right (positively skewed or skewed to the right), and a negative value indicates a greater tendency of the distribution to tail to the left (negatively skewed or skewed to the left). Skewness is 0 for a normal distribution.
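A sample calculation (hypothetical helper, using the population form of the third standardized moment):

```python
def skewness(data):
    """Population skewness: the average cubed standardized deviation."""
    n = len(data)
    m = sum(data) / n
    s = (sum((x - m) ** 2 for x in data) / n) ** 0.5
    return sum(((x - m) / s) ** 3 for x in data) / n

print(skewness([1, 2, 3]))     # 0.0 for symmetric data
print(skewness([1, 1, 1, 5]))  # positive: tails to the right
```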

SIX-SIGMA QUALITY
a term used to generally indicate that a process is well within specifications, i.e., that the specification range is ±6 standard deviations. The term is usually associated with Motorola, which named one of its key operational initiatives "Six Sigma Quality."

SIX SIGMA
see “Six Sigma Quality”.

SIGMA
the standard deviation of a statistical population, denoted by the Greek letter σ.

SIGMA QUALITY LEVEL
a commonly used measure of process capability that represents the number of standard deviations between the center of a process and the closest specification limit.
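This measure can be sketched directly (function name hypothetical, assuming a two-sided specification):

```python
def sigma_level(mean, std_dev, lsl, usl):
    """Standard deviations from the process center to the nearest specification limit."""
    return min(usl - mean, mean - lsl) / std_dev

# A process centered at 10.0 with sigma 0.5 and specification limits 8.0 to 13.0:
print(sigma_level(10.0, 0.5, 8.0, 13.0))  # 4.0
```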

SEVEN TOOLS OF QUALITY
tools that help organizations understand their processes in order to improve them. The tools are the cause-and-effect diagram, check sheet, control chart, flowchart, histogram, Pareto chart, and scatter diagram.

SCATTER DIAGRAM
a graphical technique to analyze the relationship between two variables. Two sets of data are plotted on a graph, with the y axis being used for the variable to be predicted and the x axis being used for the variable to make the prediction. The graph will show possible relationships (although two variables might appear to be related, they might not be; those who know most about the variables must make that evaluation). The scatter diagram is one of the seven tools of quality.

SAMPLE
a group of units, portion of material, or observations taken from a larger collection of units, quantity of material, or observations that serves to provide information that may be used as a basis for making a decision concerning the larger quantity.

SAMPLE STANDARD DEVIATION CHART
a control chart in which the subgroup standard deviation, s, is used to evaluate the stability of the variability within a process.

SAMPLE SIZE
the number of elements or units in a sample.

S CHART
sample standard deviation chart

RUN CHART
a basic graphical tool that charts a process over time, recording either individual readings or averages.

ROBUSTNESS
the condition of a product or process design that remains relatively stable with a minimum of variation even though factors that influence operations or usage, such as environment and wear, are constantly changing

RESIDUAL
the difference between an observed value and a predicted value.

REPRODUCIBILITY
the variation between individual people taking the same measurement and using the same gaging.

REPLICATION
the repetition of the set of all the treatment combinations to be compared in an experiment. Each of the repetitions is called a replicate.

RELIABILITY
the probability of a product performing its intended function under stated conditions without failure for a given period of time

REGRESSION
a statistical technique for determining the best mathematical expression describing the functional relationship between one response and one or more independent variables.

RANGE CHART
a control chart in which the subgroup range, R, is used to evaluate the stability of the variability within a process

RANDOM
varying with no discernible pattern.

RANDOM SAMPLING
a commonly used sampling technique in which sample units are selected in such a manner that all combinations of n units under consideration have an equal chance of being selected as the sample.
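A minimal sketch with the standard library (population values hypothetical):

```python
import random

population = list(range(100))           # IDs of 100 hypothetical units
sample = random.sample(population, 10)  # every 10-unit combination is equally likely
print(sorted(sample))
```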

QUALITY LOSS FUNCTION
a parabolic approximation of the quality loss that occurs when a quality characteristic deviates from its target value. The quality loss function is expressed in monetary units

QUALITY FUNCTION DEPLOYMENT (QFD)
a structured method in which customer requirements are translated into appropriate technical requirements for each stage of product development and production. The QFD process is often referred to as listening to the voice of the customer.

PROCESS MAPPING
see “Flowcharting”.

PROCESS CAPABILITY
a statistical measure of the inherent process variability for a given characteristic. The most widely accepted formula for process capability is 6σ.

PROCESS CAPABILITY/PERFORMANCE
see “Cp”, “Cpk”.

PROCESS CAPABILITY INDEX
the value of the tolerance specified for the characteristic divided by the process capability. There are several types of process capability indexes, including the widely used Cpk and Cp.

PROBABILITY
a measure of the likelihood of a given event occurring. It is a measure that takes on values between 0 and 1 inclusive with 1 being the certain event and 0 meaning that there is relatively no chance at all of the event occurring. How probabilities are assigned is another matter. The relative frequency approach to assigning probabilities is one of the most common.

PROBABILITY DISTRIBUTION
the assignment of probabilities to all of the possible outcomes from an experiment. This assignment is usually portrayed by way of a table, graph, or a formula.

PRECONTROL CHARTS
a method of controlling a process based on the specification limits. It is used to prevent the manufacture of defective units, but does not work toward minimizing variation in the process. The area between the specification limits is split into zones (green, yellow, and red), and adjustments are made when a specified number of points fall in the yellow or red zones.

POPULATION
a set or collection of objects or individuals. It can also be the corresponding set of values which measure a certain characteristic of a set of objects or individuals.

POISSON DISTRIBUTION
a probability distribution for the number of occurrences per unit interval (time or space); λ, the average number of occurrences per interval, is the only parameter. The Poisson distribution is a good approximation of the binomial distribution for the case where n is large and p is small, with λ = np.
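The approximation can be checked numerically (a sketch using only the standard library):

```python
from math import comb, exp, factorial

def poisson_pmf(k, lam):
    """P(X = k) for a Poisson random variable with mean lam."""
    return lam ** k * exp(-lam) / factorial(k)

def binomial_pmf(k, n, p):
    """P(X = k) for a binomial random variable with parameters n and p."""
    return comb(n, k) * p ** k * (1 - p) ** (n - k)

# Large n, small p: the Poisson with lam = n*p tracks the binomial closely.
n, p = 1000, 0.002
print(binomial_pmf(3, n, p))  # ≈ 0.180
print(poisson_pmf(3, n * p))  # ≈ 0.180
```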

PARETO CHART
a graphical tool for ranking causes from most significant to least significant. It is based on the Pareto principle, which was first defined by J.M. Juran in 1950. The principle, named after 19th century economist Vilfredo Pareto, suggests that most effects come from relatively few causes; that is, 80% of the effects come from 20% of the causes. The Pareto chart is one of the seven tools of quality.

P VALUE
the probability, computed assuming the null hypothesis is true, of obtaining a result at least as extreme as the one observed. This value comes from the data itself and provides the exact level of significance of a hypothesis test.

P CHART
for attribute data; a control chart of the proportion of defective units (or fraction defective) in a subgroup. Based on the binomial distribution.

OUTER ARRAY
a Taguchi term used in parameter design to identify the combinations of noise factors to be studied in a robust designed experiment.

OUT-OF-CONTROL
a process is said to be out-of-control if it exhibits variations larger than its control limits or shows a systematic pattern of variation.

ONE-AT-A-TIME APPROACH
a popular but inefficient way to conduct a designed experiment, in which each factor is varied in turn while all other factors are held constant.

NP CHART
for attribute data; a control chart of the number of defective units in a subgroup. Assumes a constant subgroup size. Based on the binomial distribution.

NORMAL DISTRIBUTION
the distribution characterized by the smooth, bellshaped curve.

NOMINAL
for a product whose size is of concern; the desired mean value for the particular dimension, the target value.

NOMINAL GROUP TECHNIQUE (NGT)
a technique, similar to brainstorming, used by teams to generate ideas on a particular subject. Team members are asked to silently come up with as many ideas as possible, writing them down. Each member is then asked to share one idea, which is recorded. After all the ideas are recorded, they are discussed and prioritized by the group.

NOISE
unexplained variability in the response, typically due to variables that are not controlled.

MULTIVARIATE CONTROL CHART
a control chart for evaluating the stability of a process in terms of the levels of two or more variables or characteristics.

MULTI-VARI CHART
see “Multivariate Control Chart”.

MEDIAN
the middle value of a data set when the values are arranged in either ascending or descending order.

MEAN
the average of a set of values. We usually use x̄ (x-bar) to denote a sample mean and the Greek letter μ (mu) to denote a population mean.

MEAN TIME BETWEEN FAILURES (MTBF)
the average time interval between failures for a product for a defined unit of measure (e.g., operating hours, cycles, miles).

MAIN EFFECTS PLOT
a graphic display showing the influence a single factor has on the response when it is changed from one level to another. Often used to represent the “linear effect” associated with a factor.

LOSS FUNCTION
a technique for quantifying loss due to production deviations from target values.

LCL
lower control limit. For control charts, the limit above which the process subgroup statistics must remain when the process is in control. Typically 3 standard deviations below the center line.

KURTOSIS
a measure of the shape of a distribution. A positive value indicates that the distribution has longer tails than the normal distribution (leptokurtosis), while a negative value indicates that the distribution has shorter tails (platykurtosis). For the normal distribution, the kurtosis is 0.

KAIZEN
a Japanese term that means gradual unending improvement by doing little things better and setting and achieving increasingly higher standards. The term was made famous by Masaaki Imai in his book Kaizen

JUST-IN-TIME (JIT)
a strategy that coordinates scheduling, inventory, and production to move away from batch mode of production in order to improve quality and reduce inventories.

INTERACTION PLOT
a graphical display showing how two factors (input variables) interact if one factor’s effect on the response is dependent upon the level of the other factor.

INSPECTION
measuring, examining, testing, or gauging one or more characteristics of a product or service and comparing the results with specified requirements to determine whether conformity is achieved for each characteristic

INNER ARRAY
a Taguchi term used in parameter design to identify the combinations of controllable factors to be studied in a designed experiment. Also called “design array” or “design matrix”.

IN-CONTROL PROCESS
a process in which the statistical measure being evaluated is in a state of statistical control (i.e., the variations among the observed sampling results can be attributed to a constant system of chance causes). (See also "out-of-control process.")

HYPOTHESIS TESTS
a procedure whereby one of two mutually exclusive and exhaustive statements about a population parameter is concluded. Information from a sample is used to infer something about the population from which the sample was drawn.

HYPOTHESIS TESTS, NULL
the hypothesis tested in tests of significance is that there is no difference (null) between the population of the sample and the specified population (or between the populations associated with each sample). The null hypothesis can never be proved true. It can, however, be shown, with specified risks of error, to be untrue; that is, a difference can be shown to exist between the populations. If it is not disproved, one may surmise that it is true. (It may be that there is insufficient power to prove the existence of a difference rather than that there is no difference; that is, the sample size may be too small. By specifying the minimum difference that one wants to detect and β, the risk of failing to detect a difference of this size, the required sample size can be determined.)

HYPOTHESIS TESTS, ALTERNATIVE
the hypothesis that is accepted if the null hypothesis is disproved. The choice of alternative hypothesis will determine whether "one-tailed" or "two-tailed" tests are appropriate.

HISTOGRAM
a graphic summary of variation in a set of data. The pictorial nature of the histogram lets people see patterns that are difficult to see in a simple table of numbers. The histogram is one of the seven tools of quality.

GOODNESS-OF-FIT
any measure of how well a set of data matches a proposed distribution. Chi-square is the most common measure for frequency distributions. Simple visual inspection of a histogram is a less quantitative, but equally valid, way to determine goodness-of-fit.

GAUGE REPEATABILITY AND REPRODUCIBILITY
the evaluation of a gauging instrument's accuracy by determining whether the measurements taken with it are repeatable (i.e., there is close agreement among a number of consecutive measurements of the output for the same value of the input under the same operating conditions) and reproducible (i.e., there is close agreement among repeated measurements of the output for the same value of input made under the same operating conditions over a period of time)

GAGE R & R
see “Gauge Repeatability and Reproducibility”.

FULL FACTORIAL
all possible combinations of the factors and levels. Given k factors, all with two levels, there will be 2^k runs. If all factors have 3 levels, there will be 3^k runs.
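The run count is easy to enumerate (a sketch; factor names hypothetical):

```python
from itertools import product

# Three factors, each at two levels (coded -1 and +1):
factors = {"temp": [-1, 1], "pressure": [-1, 1], "time": [-1, 1]}
runs = list(product(*factors.values()))  # every treatment combination
print(len(runs))  # 8, i.e., 2**3
```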

FREQUENCY DISTRIBUTION
a set of all the various values that individual observations may have and the frequency of their occurrence in the sample or population.

FORCE FIELD ANALYSIS
a technique for analyzing the forces that will aid or hinder an organization in reaching an objective. An arrow pointing to an objective is drawn down the middle of a piece of paper. The factors that will aid the objective's achievement (called the driving forces) are listed on the left side of the arrow; the factors that will hinder its achievement (called the restraining forces) are listed on the right side of the arrow.

FLOWCHARTING
a graphical representation of the steps in a process. Flowcharts are drawn to better understand processes. The flowchart is one of the seven tools of quality.

FISHBONE DIAGRAM
see "cause-and-effect diagram"

FAILURE MODE EFFECT ANALYSIS (FMEA)
a procedure in which each potential failure mode in every subitem of an item is analyzed to determine its effect on other subitems and on the required function of the item.

FACTORIAL EXPERIMENTS
experiments in which all possible treatment combinations formed from two or more factors, each being studied at two or more versions (levels), are examined so that interactions (differential effects) as well as main effects can be estimated.

FACTOR
an assignable cause which may affect the responses (test results) and of which different versions (levels) are included in the experiment.

F STATISTIC
a test statistic used to compare the variance from two normal populations.

F DISTRIBUTION
distribution of Fstatistics.

EXPONENTIAL DISTRIBUTION
a probability distribution mathematically described by an exponential function. Used to describe the probability that a product survives a length of time t in service under the assumption that the probability of a product failing in any small time interval is independent of time.

EXPERIMENTAL DESIGN
a formal plan that details the specifics for conducting an experiment, such as which responses, factors, levels, blocks, treatments, and tools are to be used

DISTRIBUTIONS
see “Probability Distribution”.

DESIGN OF EXPERIMENTS (DOE)
a branch of applied statistics dealing with planning, conducting, analyzing, and interpreting controlled tests to evaluate the factors that control the value of a parameter or group of parameters

DEMERIT CHART
a control chart for evaluating a process in terms of a demerit (or quality score), i.e., a weighted sum of counts of various classified nonconformities.

DEGREES OF FREEDOM
a parameter in the t, F, and χ² distributions. It is a measure of the amount of information available for estimating the population variance, σ². It is the number of independent observations minus the number of parameters estimated.

DEFECT
a product's or service's nonfulfillment of an intended requirement or reasonable expectation for use, including safety considerations. There are four classes of defects

DECISION MATRIX
a matrix used by teams to evaluate problems or possible solutions. After a matrix is drawn to evaluate possible solutions, for example, the team lists them in the far-left vertical column. Next, the team selects criteria to rate the possible solutions, writing them across the top row. Third, each possible solution is rated on a scale of 1 to 5 for each criterion and the rating recorded in the corresponding grid. Finally, the ratings of all the criteria for each possible solution are added to determine its total score. The total score is then used to help decide which solution deserves the most attention.


CUMULATIVE SUM CONTROL CHART (CUSUM)
a control chart which plots the cumulative deviation of each subgroup’s average from the nominal value. If the process consistently produces parts near the nominal, the CuSum chart shows a line which is essentially horizontal. If the process begins to shift, the line will show an upward or downward trend. The CuSum chart is sensitive to small shifts in process average.
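A sketch of the plotted statistic (function name hypothetical):

```python
def cusum(values, nominal):
    """Running cumulative deviation of each subgroup average from the nominal value."""
    total, path = 0.0, []
    for v in values:
        total += v - nominal
        path.append(total)
    return path

# A process that shifts upward partway through shows a rising trend:
print(cusum([10.1, 9.9, 10.0, 10.4, 10.5, 10.6], nominal=10.0))
```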

Cpk
during process capability studies, Cpk is an index that compares the natural tolerance of a process with the specification limits, accounting for process centering. Cpk has a value equal to Cp if the process is centered on the nominal; if Cpk is negative, the process mean is outside the specification limits; if Cpk is between 0 and 1, the natural tolerance of the process falls partly outside the specification limits; if Cpk is larger than 1, the natural tolerance falls completely within the specification limits. A value of 1.33 or greater is usually desired.

Cp
during process capability studies, Cp is a capability index which shows the process capability potential but does not consider how centered the process is. Cp may range from 0 to infinity, with a large value indicating greater potential capability. A value of 1.33 or greater is usually desired.
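Both indexes can be sketched from their usual formulas (a hypothetical illustration, assuming a two-sided specification):

```python
def cp(usl, lsl, sigma):
    """Potential capability: tolerance width over the 6-sigma process spread."""
    return (usl - lsl) / (6 * sigma)

def cpk(usl, lsl, mean, sigma):
    """Actual capability: distance from the mean to the nearest spec limit, over 3 sigma."""
    return min(usl - mean, mean - lsl) / (3 * sigma)

# Centered process: Cp and Cpk agree.
print(cp(16, 4, 1))       # 2.0
print(cpk(16, 4, 10, 1))  # 2.0
# Same process shifted off center: Cpk drops.
print(cpk(16, 4, 13, 1))  # 1.0
```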

COUNT-PER-UNIT CHART
a control chart for evaluating the stability of a process in terms of the average count of events of a given classification per unit occurring in a sample.

COUNT CHART
a control chart for evaluating the stability of a process in terms of the count of events of a given classification occurring in a sample.

COST OF QUALITY
a term coined by Philip Crosby referring to the cost of poor quality

COST OF POOR QUALITY
the costs associated with providing poor-quality products or services. There are four categories of costs

CORRELATION COEFFICIENT
(R) a number between −1 and 1 that indicates the degree of linear relationship between two sets of numbers.
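The formula can be sketched in plain Python (function name hypothetical):

```python
def correlation(xs, ys):
    """Pearson correlation coefficient R between two equal-length data sets."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

print(correlation([1, 2, 3, 4], [2, 4, 6, 8]))  # ≈ 1.0  (perfect positive)
print(correlation([1, 2, 3, 4], [8, 6, 4, 2]))  # ≈ -1.0 (perfect negative)
```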

CORRECTIVE ACTION
the implementation of solutions resulting in the reduction or elimination of an identified problem.

CONTROL
a process is said to be in a state of statistical control if the process exhibits only random variation (as opposed to systematic variation and/or variation with known sources). When monitoring control with control charts, a state of control is exhibited when all points remain between set control limits without any abnormal (nonrandom) patterns.

CONTROL LIMITS
upper and lower bounds in a control chart that are determined by the process itself. They can be used to detect special causes of variation. They are usually set at ±3 standard deviations from the center line.
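As a sketch (names hypothetical), limits at ±3 standard deviations from the center line:

```python
def control_limits(center, sigma):
    """Return (LCL, UCL): the center line minus and plus 3 standard deviations."""
    return center - 3 * sigma, center + 3 * sigma

lcl, ucl = control_limits(100.0, 2.0)
print(lcl, ucl)  # 94.0 106.0
```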

CONTROL CHART
a chart with upper and lower control limits on which values of some statistical measure for a series of samples or subgroups are plotted. The chart frequently shows a central line to help detect the trend of plotted values toward either control limit.

CONTINUOUS IMPROVEMENT
the ongoing improvement of products, services, or processes through incremental and breakthrough improvements.

CONFORMANCE
an affirmative indication or judgment that a product or service has met the requirements of a relevant specification, contract, or regulation.

CONFIDENCE LIMITS
the end points of the interval about the sample statistic that is believed, with a specified confidence coefficient, to include the population parameter.

CONFIDENCE INTERVAL
range within which a parameter of a population (e.g., mean, standard deviation, etc.) may be expected to fall, on the basis of a measurement, with some specified confidence level or confidence coefficient.

COMPANY CULTURE
a system of values, beliefs, and behaviors inherent in a company. To optimize business performance, top management must define and create the necessary culture.

COMMON CAUSES
causes of variation that are inherent in a process over time. They affect every outcome of the process and everyone working in the process. (See also "special causes.")

COEFFICIENT OF VARIATION
a measure of relative dispersion that is the standard deviation divided by the mean and multiplied by 100 to give a percentage value. This measure cannot be used when the data take both negative and positive values or when the data have been coded in such a way that the value X = 0 does not coincide with the origin.
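A direct sketch of the calculation (function name hypothetical, using the population standard deviation):

```python
def coefficient_of_variation(data):
    """Standard deviation expressed as a percentage of the mean."""
    n = len(data)
    m = sum(data) / n
    s = (sum((x - m) ** 2 for x in data) / n) ** 0.5
    return s / m * 100

# Mean 5, standard deviation 2 -> CV of 40 percent:
print(coefficient_of_variation([2, 4, 4, 4, 5, 5, 7, 9]))  # 40.0
```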

COEFFICIENT OF DETERMINATION
(R²); the square of the sample correlation coefficient; a measure of the part of the variation in one variable that can be explained by its linear relationship with another variable; it represents the strength of a model. (1 − R²) × 100% is the percentage of noise in the data not accounted for by the model.

CHI-SQUARE
the test statistic used when testing the null hypothesis of independence in a contingency table or when testing the null hypothesis of a set of data following a prescribed distribution.

CHI-SQUARE DISTRIBUTION
the distribution of chi-square statistics.

CHECKLIST
a tool used to ensure that all important steps or actions in an operation have been taken. Checklists contain items that are important or relevant to an issue or situation. Checklists are often confused with check sheets and data sheets (see individual entries).

CHECK SHEET
a simple data-recording device. The check sheet is custom-designed by the user, which allows him or her to readily interpret the results. The check sheet is one of the seven tools of quality. Check sheets are often confused with data sheets and checklists (see individual entries).

CENTRAL LIMIT THEOREM
if samples of size n are drawn from a population and the mean x̄ is calculated for each sample, the shape of the distribution of these sample means is found to approach a normal distribution for sufficiently large n. This theorem allows one to use the assumption of a normal distribution when dealing with x̄. “Sufficiently large” depends on the population’s distribution and what range of x̄ is being considered; for practical purposes, the easiest approach may be to take a number of samples of a desired size and see if their means are normally distributed. If not, the sample size should be increased. This theorem is one of the most important results in all of statistics and is the heart of inferential statistics.
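The theorem is easy to see in a small simulation (a sketch; sample sizes chosen arbitrarily):

```python
import random

random.seed(0)  # reproducible hypothetical data
# Draw 2000 samples of size 30 from a uniform (decidedly non-normal) population on [0, 1):
means = [sum(random.random() for _ in range(30)) / 30 for _ in range(2000)]
grand_mean = sum(means) / len(means)
print(round(grand_mean, 2))  # close to 0.5, the population mean
```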

CENTRAL COMPOSITE DESIGN
a three-level design that starts with a two-level fractional factorial and some center points. If needed, axial points can be tested to complete quadratic terms. Used typically for quantitative factors and designed to estimate all linear effects plus desired quadratic and two-way interaction effects.

CAUSEANDEFFECT DIAGRAM
a tool for analyzing process dispersion. It is also referred to as the Ishikawa diagram, because Kaoru Ishikawa developed it, and the fishbone diagram, because the complete diagram resembles a fish skeleton. The diagram illustrates the main causes and subcauses leading to an effect (symptom). The cause-and-effect diagram is one of the seven tools of quality.

CAPABILITY
a measure of quality for a process usually expressed as sigma capability, Cpk, or defects per million opportunities (DPMO). It is obtained by comparing the actual process with the specification limits.

CALIBRATION
the comparison of a measurement instrument or system of unverified accuracy to a measurement instrument or system of known accuracy to detect any variation from the required performance specification.


BRAINSTORMING
a technique that teams use to generate ideas on a particular subject. Each person in the team is asked to think creatively and write down as many ideas as possible. The ideas are not discussed or reviewed until after the brainstorming session.

BOX-BEHNKEN DESIGN
a three-level design used for quantitative factors and designed to estimate all linear, quadratic, and two-way interaction effects.

BLOCK DIAGRAM
a diagram that shows the operation, interrelationships, and interdependencies of components in a system. Boxes, or blocks (hence the name), represent the components; connecting lines between the blocks represent interfaces. There are two types of block diagrams: a functional block diagram, which shows a system's subsystems and lower-level products, their interrelationships, and interfaces with other systems; and a reliability block diagram, which is similar to the functional block diagram except that it is modified to emphasize those aspects influencing reliability.

BINOMIAL DISTRIBUTION
given that a trial can have only two possible outcomes (yes/no, pass/fail, heads/tails), of which one outcome has probability p and the other probability q = 1 − p, the probability that the outcome represented by p occurs x times in n trials is given by the binomial distribution.
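The probability mass function can be written directly from this definition (a sketch using the standard library):

```python
from math import comb

def binomial_pmf(x, n, p):
    """Probability of exactly x occurrences of the p-outcome in n independent trials."""
    return comb(n, x) * p ** x * (1 - p) ** (n - x)

# Probability of exactly 2 heads in 4 fair coin tosses:
print(binomial_pmf(2, 4, 0.5))  # 0.375
```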

BIMODAL DISTRIBUTION
a frequency distribution which has two peaks. Usually an indication of samples from two processes incorrectly analyzed as a single process.

BIAS
systematic error that leads to a difference between the average result of a population of measurements and the true or accepted value of the quantity being measured.


BETA RISK
see “Type II Error”.

BENCHMARKING
an improvement process in which a company measures its performance against that of best-in-class companies, determines how those companies achieved their performance levels, and uses the information to improve its own performance. The subjects that can be benchmarked include strategies, operations, processes, and procedures.

BALANCED DESIGN
a two-level experimental design is balanced if each factor is run the same number of times at the high and low levels.

ATTRIBUTE DATA
go/no-go information. The control charts based on attribute data include the percent chart, number of affected units chart, count chart, count-per-unit chart, quality score chart, and demerit chart.

ANALYSIS OF VARIANCE (ANOVA)
a basic statistical technique for analyzing experimental data. It subdivides the total variation of a data set into meaningful component parts associated with specific sources of variation in order to test a hypothesis on the parameters of the model or to estimate variance components. There are three models

ANALYSIS OF MEANS (ANOM)
a statistical procedure for troubleshooting industrial processes and analyzing the results of experimental designs with factors at fixed levels.

ALTERNATIVE HYPOTHESIS
the hypothesis to be accepted if the null hypothesis is rejected. It is denoted by HA.

ALPHA RISK
See “Type I Error”.

ALIASING
when two factors or interaction terms are set at identical levels throughout the entire experiment (i.e., the two columns are 100% correlated).

ACCEPTANCE SAMPLING
inspection of a sample from a lot to decide whether to accept or not accept that lot. There are two types

ACCEPTANCE SAMPLING PLAN
a specific plan that indicates the sampling sizes and the associated acceptance or nonacceptance criteria to be used. In attributes sampling, for example, there are single, double, multiple, sequential, chain, and skip-lot sampling plans. In variables sampling, there are single, double, and sequential sampling plans.

ACCEPTABLE QUALITY LEVEL
when a continuing series of lots is considered, a quality level that, for the purposes of sampling inspection, is the limit of a satisfactory process average.

REPEATABILITY
the extent to which repeated measurements of a particular object with a particular instrument produce the same value.

