-
What is research?
- Diligent, systematic study
- Uses the scientific method
- Asks questions to discover information
-
What is Evidence-Based Practice? What is it necessary for?
Making choices that are confirmed by sound scientific data; ensuring that decisions are based on the best evidence currently available.
Necessary to: determine clinical judgements, determine the organization of health care, and address economic challenges.
-
Knowledge translation? How big is the gap?
Translating research into daily practice. There is currently a 20-30 year gap, e.g., the continued use of "passive therapies."
-
Evidence-informed practice?
Practice informed through research and education
-
Practice-informed evidence?
Evidence informed through clinical practice and clinical administrative policy
-
Outcome Measure?
- A measure that has psychometric properties that enhance its ability to measure change over time in an individual or group. Useful ones are:
- Quantifiable, clinically available, practical, cost-effective, and reliable & valid.
-
Disablement Model
Shows the relationships among pathology, impairments, functional limitations, and disability.
-
4 sources of knowledge?
- Tradition: usually not tested for validity or against better alternatives
- Authority: based on success, experience, or reputation
- Trial and Error: not as systematic
- Logical Reasoning: 2 kinds, inductive and deductive
-
Deductive and Inductive reasoning?
Deductive: develops specific conclusions from general premises; the conclusion's validity depends entirely on the truth of the premises
Inductive: develops generalizations from observations. Its strength relies on the number of different observations one can generalize from.
-
Scientific Method, Based on 2 assumptions?
1) Nature is orderly and regular such that events are consistent and predictable to some extent
2) Events are not random but rather have one or more causes that can be discovered
-
Characteristics of the Scientific Method?
- Systematic: logical sequence of examination
- Empirical: documentation of objective data through direct observation
- Controlled
- Critical examination
-
Three types of research methods?
- 1) Descriptive: describes populations
- 2) Exploratory: find relationships
- 3) Experimental: cause and effect
-
Quantitative Research:
controlled, rigid experimental design, measurement and analysis of quantitative data.
-
Qualitative Research
subjective, narrative information, typically obtained under less structured or constrained conditions
-
5 steps of the clinical reasoning process?
- 1) assess
- 2) critical reflection: analysis and interpretation of findings
- 3) relate problems to treatment goals
- 4) treat- based on best practices, participation, guidelines
- 5) reassess
-
Positivist Paradigm?
- Traditional scientific approach to research, grounded in positivism and modernism
- - emphasizes rational, logical, and scientific thinking
- - fundamental premise: "there is a real world driven by natural causes that can be studied and known"
- - objective, blocks out bias
- - deductive process
- - fixed research designs that are tightly controlled
- - goal is to seek generalizations
**Examines the various parts to understand the whole phenomenon; can predict future occurrences.
-
The Naturalistic Paradigm?
- Grounded in naturalism and postmodernism (a social phenomenon in response to positivism)
- - emphasizes deconstruction and reconstruction of ideas and structures
- - reality is multiple, subjective, and mentally constructed
- - findings are the creation of the interactive process
- - subjectivity is inevitable and desired
- - inductive process
- - research design is flexible, with emphasis on analyzing rich narrative information for relevant patterns
Goal: examine the whole phenomenon to envision how the various parts relate to each other; makes no attempt to predict the future.
-
Define: Phenomenon, Concept, and Constructs
Phenomenon: Any process known through the senses rather than by intuition or reasoning; any observable occurrence.
Concept: An abstract or general idea inferred or derived from specific instances, e.g., good health, pain, emotional disturbance.
** Phenomenon and concept are often used interchangeably
Construct: Concepts that represent non-observable phenomena; abstractions deliberately and systematically invented or constructed by researchers for a specific purpose, e.g., self-care, models of health maintenance.
-
Relationship between concepts and theories?
Concepts are the building blocks of theories, theories usually contain more than one concept knitted together
-
How is research usually carried out in quantitative studies?
Researcher--> Theory--> Prediction (based on theory)--> carry out experiment--> either reject, modify, or lend credence to theory.
** Deductive Reasoning!!
-
How is research usually carried out in qualitative studies?
Researcher--> Question-->Participant interviews--> Theory--> Conceptualize, seek patterns and relationships
Arrive at a theory that explains phenomena as they occur; do not preconceive a theory
** Inductive Reasoning!!
-
What are the characteristics of theories?
A systematic, abstract explanation of some aspect of reality; should be rational, testable, an efficient explanation, and fluid
-
Testing of theory?
- - directly testing a theory is not possible; rather, the relationships of the observations that the theory describes are tested
- - testing determines whether observations are consistent with the theory, supporting or failing to support it
- - a law is a theory that has reached absolute consistency
-
Deductive Reasoning?
- Process of developing specific predictions from general principles
- -- a deduction is what naturally flows from the preceding premises, but if any premise is not true, the deduction will not be valid
-
Inductive Reasoning?
- Process of developing generalizations from specific observations
- -- it begins with experience and results in generalizations that are probably true, similar to "common sense"
-
Model of the Research Process: 5 steps
- 1) Identify the research question
- - identify the problem
- - review literature
- - identify variables/theoretical framework
- - state hypothesis
2) Design the study (design protocol and choose a sample)
3) Methods (collect and reduce data)
4) Data analysis (analyze data and interpret findings)
5) Communication (report findings and suggestions for further study)
-
Independent Variable:
The variable or intervention that is expected to determine the outcome
-
Dependent Variable:
The variable that is determined by the intervention
-
Extraneous Variable:
The variable not directly related to the purpose of the study, but one that could affect the outcomes or dependent variables.
-
Research Objectives? 4 types?
Must specifically and concisely delineate what the study is expected to accomplish
- 4 Types:
- 1) Evaluate measuring instruments
- 2) Describe populations or clinical phenomena
- 3) To explore relationships- cause/effect, associations
- 4) To make comparisons
-
Hypothesis:
Predictive statement about how the independent variable will affect the dependent variable
-
Qualitative Inquiry:
- - Explores the experiences of people in their everyday lives
- - Describes a phenomenon about which very little is known
- - Reveals the meaning behind the numbers
- - Rigorous Inquiry
-
Design Components: Qualitative vs. Quantitative
- Quantitative:
- - precise measurement and comparisons
- - relationships between variables
- - inferences from samples to populations
- Qualitative:
- - meaning, context, process
- - discovering unanticipated outcomes
- - understanding single cases
- - inductive theory development
-
3 Different Research Designs?
1) Descriptive: describes the nature of an existing phenomenon; often undertaken without a specific hypothesis
2) Exploratory: systematic examination of two or more variables; may predict cause and effect, but cannot determine cause and effect. Does not control/manipulate variables; instead, examines relationships between variables
3) Experimental: must have an independent variable manipulated by the experimenter
-
What must an experimental design have?
- 1) Independent variable manipulated by the experimenter
- 2) Randomly assigned groups
- 3) Control group
- 4) Should control for extraneous variables
- 5) Should have blinding (either single or double)
-
Operational Definition:
Specific way of describing something for your study
-
Validity of Quantitative Experiments: ask 4 questions
** The first 3 address internal validity: "did the treatment cause the observed change, or was there another reason for the change?"
- 1) Is there a statistically significant relationship between the independent and dependent variables?
- 2) If yes, is there evidence that the independent variable causes the change in the dependent variable?
- 3) If yes, to what extent can the experiment be generalized to the general population?
- The fourth question addresses external validity:
- 4) Can the results be generalized to those outside the experimental design?
- ** When factors are very controlled in a study, external validity is threatened.
- ** The more environmental factors that are controlled, the less applicable the research is to clinical settings... but the more INTERNALLY valid it is!
-
Qualitative Research Q's:
- -The question drives the methodology
- -less specific than in quantitative, but still must be clearly formulated
- -open ended
- -can be a moving goal post
-
Statistical conclusion validity:
Refers to the appropriate use of statistical procedures for analyzing data.
-
Construct validity:
Refers to the theoretical conceptualization of the independent and dependent variables.
-
DESCRIPTORS OF RESEARCH DESIGNS
- Retrospective and Prospective Research
- Retrospective – examination of data collected in the past.
- cannot control the reliability of data collection
- cannot operationally define variables of interest.
- can provide a rich source of information
- Prospective
- variables are operationally defined before the data is collected.
- data is collected for the purpose of the study so usually more “reliable” and “valid”.
- disadvantages are resources required i.e. expense, manpower, time
-
Three main types of Designs
- Descriptive
- - case studies/reports: description of an interesting, new, or unique case
- - usually focuses on details of the condition and/or treatment
- - thorough analysis of a single subject may reveal relationships that were not obvious from routine clinical care
- - very practical but lacks generalizability
- Exploratory: cross-sectional
- - correlation: examines the extent to which one variable directly or indirectly relates to another variable
- - does not establish cause and effect
- - limitation and complexity of clinical correlations: a statistically significant correlation may imply a clinical relationship, or the two correlated variables may be related to a third variable not yet identified
- True Experimental designs
- - Pretest-Posttest Control Group Design: two or more groups have random assignment to different treatments and one group is a control group; only the treatment/control differs between groups
- - testing occurs before and after the treatment interventions
- - considered the "scientific standard" for cause and effect, e.g., randomized control designs
- - non-controlled: another variation is treatment compared to standard treatment, used when withholding treatment is unethical
- - Post-test only: e.g., determine if length of stay in hospital is shorter when rehab begins on day 3 versus day 7 after an elective hip or knee arthroplasty
- - Repeated measures designs: the person is tested repeatedly and acts as their own control. A learning effect on test outcomes could be a confounding problem. Could incorporate a cross-over design, in which the subject is tested before and after a control period in addition to before and after the treatment intervention
- - Single subject designs (Chapter 12): examines one person or a small group of subjects to compare several treatments, a single treatment, or components of treatment
- - there are several different types of single subject designs
- - analysis can be by visual analysis of trends or by statistical analysis; an example of statistical analysis is the two standard deviation band method (see the sketch below)
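A minimal Python sketch of the two standard deviation band method; the data are hypothetical, and the two-consecutive-points decision rule is one common convention:

```python
# Two-standard-deviation band method for a single-subject (A-B) design.
# All data are hypothetical.
from statistics import mean, stdev

baseline = [12, 14, 13, 15, 12, 14]    # phase A: repeated baseline measures
treatment = [16, 18, 17, 19, 18, 20]   # phase B: measures during intervention

m, sd = mean(baseline), stdev(baseline)
lower, upper = m - 2 * sd, m + 2 * sd
print(f"baseline mean {m:.1f}, 2-SD band [{lower:.1f}, {upper:.1f}]")

# One common decision rule: two or more consecutive treatment-phase points
# outside the band suggest a significant change.
outside = [x for x in treatment if x < lower or x > upper]
print(f"{len(outside)} of {len(treatment)} treatment points fall outside the band")
```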
-
Strengths and Challenges of Single Subject Design
- Strengths of Single Subject Design (Chapter 12)
- - group data often does not allow the researcher or reader to differentiate between the characteristics of those who respond to treatment and those who do not
- - from a practical perspective, less demanding than an RCT because:
- - o fewer numbers are needed
- - o controlling extraneous factors for one individual or small group may be less rigorous than controlling factors for several groups
- Challenge of Single Subject Design: lack of generalizability (poor external validity)
- - treatment can only be assumed effective for subjects with similar characteristics
-
Survey Research?
- covers the full spectrum of quantitative research: descriptive, exploratory, experimental
- a series of questions posed to subjects in either written or oral format
- often asks questions about attitudes, values, levels of knowledge, current practices, or characteristics of a specific group
- surveys appear easy to construct but require considerable rigor to ensure clarity of questions and adequate reliability and validity
- can use a variety of measurement scales, e.g., Likert (the original had 5 categories), semantic differential, visual analogue scale
- can be administered using interviews or questionnaires
-
Define Ethnography, Phenomenology, and Grounded Theory
Ethnography – the study of cultural processes that uses different ways to research, observe, and document people or events in their daily lives. In this approach, the observer is immersed in the culture or group in order to gather data and better understand the situations being investigated.
Phenomenology – an approach that acknowledges that the reality of events is observed and perceived through the person's interpretation of those events and cannot be separated from the observer's interpretation.
Grounded Theory – a qualitative research method that appears to operate in the reverse fashion to traditional research. This approach is used to gather data in order to formulate a theory.
-
Efficacy Vs. Effectiveness?
Efficacy: Does the treatment work in a controlled environment? (i.e. research study)
Effectiveness: Does the treatment work in the real world?
-
5 things one needs to do after formulating the research question and design?
- Determine the criteria for the sample;
- Determine the outcomes;
- Ensure the sample and outcomes match the question and design;
- Determine the data collection methods;
- Ensure Rigor.
-
Define population, sample, and sampling bias. What are the 2 types of sampling bias?
- Population – A large group of individuals who meet a specified set of criteria, e.g., all individuals with osteoarthritis or all individuals with lower limb amputations.
- Sample – A subgroup of the population that the researcher is studying. The sample serves as a reference group for estimating the characteristics of the population.
- Sampling Bias
- Refers to the occurrence when the sample selected either over-represents or under-represents the population attributes that are related to the phenomenon being studied.
- Sampling biases can be unconscious or conscious.
- E.g., conscious – when a sample is selected purposefully, e.g., when only those with a specific condition and no co-morbidities are selected.
- E.g., unconscious – when the researchers select those individuals who appear co-operative, or those individuals who volunteer to participate in the study, for example those who can easily access the study testing and intervention sites.
- Conclusions drawn from such samples are not useful for describing the actual target population.
-
Define inclusion and exclusion criteria
Inclusion criteria- Refer to the primary characteristics that the subjects must have in order to be included in the study. These may be characteristics inherent to the subject, the environment, or be temporally related, depending on whether these factors are relevant to the research question.
Exclusion criteria- Refer to the characteristics that preclude an individual from being eligible for study inclusion. These are generally factors that are considered potentially confounding to the results.
-
Three types of probability sampling?
Random sample – Also known as a probability sample; refers to the selection process by which every unit in the population has an equal chance, or probability, of being sampled.
Stratified random sampling – The sampling process in which relevant population characteristics are identified and members are partitioned into homogeneous subgroups before randomly selecting individuals from these subgroups.
Disproportional sampling – Selecting more individuals from a subgroup than the number that would proportionately represent that group, to ensure n is adequate (e.g., more male PTs). This is usually done when the strata or subgroups are of greatly unequal size and proportional stratified sampling would result in a very small number of individuals in one subgroup, considered inadequate to generalize conclusions to the larger population. (See the sketch below.)
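A minimal Python sketch of stratified (and disproportional) sampling; the strata and counts are made up:

```python
# Stratified random sampling: partition the population into homogeneous
# subgroups (strata), then sample randomly within each. Hypothetical data.
import random

population = ([("male", i) for i in range(20)]      # small stratum
              + [("female", i) for i in range(80)])  # large stratum

def stratified_sample(pop, n_per_stratum):
    strata = {}
    for member in pop:
        strata.setdefault(member[0], []).append(member)  # group by stratum label
    sample = []
    for group in strata.values():
        sample.extend(random.sample(group, n_per_stratum))
    return sample

# Drawing 10 from each stratum is disproportional: it takes 50% of the
# smaller stratum (10/20) but only 12.5% of the larger one (10/80),
# ensuring the small subgroup still has an adequate n.
print(len(stratified_sample(population, 10)))  # 20
```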
-
Three types of non-probability sampling?
Convenience sampling – A form of non-probability sampling in which subject selection is on the basis of availability or convenience. A potential for self-selection bias is a limitation of this sampling method: those who volunteer for a study may be atypical of the target population.
Quota sampling – The researcher guides the sampling process to ensure an adequate number of subjects is obtained for each stratum, where each stratum is represented in the same proportion as in the population, e.g., BMI and gender.
Purposive sampling – The researcher handpicks subjects based on specific criteria.
-
Define Power; what three things does power depend on?
- The ability to find a significant difference between the control and intervention/patient groups where one exists. Sample size directly affects the statistical power of the study: with a small sample the power tends to be low, and a study may not succeed in demonstrating the desired effects.
- Power of the study depends on:
- - study design, e.g., pre-post test design, randomized control trial
- - number of variables
- - variability of the outcome in the control group, the expected difference, and the variability of the intervention/patient group
Calculating the power of the expected outcome is essential to determine the feasibility of the study (see the sketch below).
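A minimal sketch of an a priori sample-size/power calculation, using the standard normal-approximation formula for comparing two group means; the effect sizes plugged in are hypothetical:

```python
# n per group ~= 2 * ((z_alpha + z_beta) / effect_size)^2 for a two-sided,
# two-sample comparison of means (normal approximation).
from statistics import NormalDist

def n_per_group(effect_size, alpha=0.05, power=0.80):
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # ~1.96 for alpha = 0.05
    z_beta = NormalDist().inv_cdf(power)           # ~0.84 for power = 0.80
    return 2 * ((z_alpha + z_beta) / effect_size) ** 2

print(round(n_per_group(0.6)))  # larger expected effect -> ~44 per group
print(round(n_per_group(0.3)))  # smaller expected effect -> ~174 per group
```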
-
Three things statistical significance is dependent on?
effect size, alpha level, and sample size
-
How to calculate effect size?
effect size = (mean difference between groups: treatment − control) / (SD of control group)
OR
effect size = (mean pre-post change in outcomes) / (SD of the change scores)
-
What is a small, medium, and large effect size?
- <0.3 = small
- 0.3-0.6 = medium
- >0.6 = large
If a treatment has a large effect size, then fewer participants are needed to show statistical significance (see the sketch below).
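A minimal Python sketch of both effect-size formulas from the cards above, with made-up scores:

```python
from statistics import mean, stdev

def effect_size_groups(treatment, control):
    """(mean of treatment - mean of control) / SD of the control group."""
    return (mean(treatment) - mean(control)) / stdev(control)

def effect_size_change(pre, post):
    """(mean pre-post change) / SD of the change scores."""
    changes = [b - a for a, b in zip(pre, post)]
    return mean(changes) / stdev(changes)

control = [50, 55, 52, 48, 51]     # hypothetical outcome scores
treatment = [52, 56, 53, 50, 54]
d = effect_size_groups(treatment, control)
print(f"d = {d:.2f}")  # ~0.70: large by the scale above
```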
-
Is statistical significance the same as clinical significance?
No. Statistical significance is a mathematical calculation, while clinical significance is determined through actual trial.
-
What is the minimally clinically important difference?
The clinically important difference is determined by patients and/or health professionals. It is based on meaningful improvement and not solely on a statistically significant improvement.
-
Definition and Principles of Measurement
- PRINCIPLES OF MEASUREMENT
- Measurement is used as a way of understanding, evaluating, and differentiating characteristics of people and objects. It provides a method for achieving a degree of precision in this understanding, to describe physical or behavioral characteristics according to their quantity, degree, capacity or quality.
Definition: The process of assigning numerals to variables to represent quantities of characteristics according to certain rules.
-
Definition of Discrete and Continuous
Discrete often refers to scales where only whole numbers can be assigned, e.g., heartbeats.
Continuous can theoretically take on any value along a continuum within a defined range; in reality, continuous values can never be measured exactly but are limited by the sophistication and precision of the measuring tool, e.g., height, weight.
-
Define Construct
- Constructs are not simply defined by operational definitions but also demonstrate relationships to other constructs or observable phenomena
- Velocity = distance / time
- Work = force × distance
- These constructs have little meaning except as a function of other constructs Eg. Disability is a composite of many constructs
-
Define Order, distance, and origin
- Order: One value is smaller or larger than another
- Distance: Values are a defined distance apart
- Origin: A meaningful zero
-
What are the 4 scales of measurement?
- Nominal scales- Have no order, distance, or origin. Categories are mutually exclusive.
- Eg. male/female, yes/no, disagree/agree
- Ordinal scales- Have order, i.e. one value is lesser or greater than other values but mathematical manipulation is not meaningful.
- Eg. fair, good, excellent; none, minimal, moderate, strong
- Interval scales- Have order and a meaningful distance between each value; use a real number system but have no meaningful zero. Can mathematically manipulate scores from interval scales.
- Eg. 2002 AD vs 2002 BC. The year 0 does not represent absence of years.
- 32° Fahrenheit or 0° Celsius. In both scales, 0 does not represent absence of temperature.
- Ratio scales- Have order, distance, and an origin (meaningful zero).
- Eg. Height, weight. A person 2 m tall is twice as tall as someone 1 m in height.
-
Define Measurement error and give two kinds
How true is the measure to the actual value?
- Systematic error:
- o Affects measurements in the same direction.
- Eg. Zero offset of a weighing scale.
- Eg. Tape measure that is incorrectly marked at the zero end.
- o Can affect one group more so than another.
- Random error
- o Error due to chance.
- o Can be larger or smaller than true value.
- o Can affect study groups equally.
- o Random errors can eventually cancel each other out but may increase the variability of a measure, resulting in a ‘noisy’ measurement.
-
Define Reliability
- How consistent is the measurement?
- How free is the measurement from error?
- Is the measurement reproducible?
- Is the measurement dependable?
-
What are the two sources of measurement error?
- Components of Measuring System
- The person measuring
- The instrument used to measure
- Inherent variability of the characteristic being measured. Eg. Leg length versus BP.
- Other sources
- Environmental influences, e.g., background noise
- Human variability
- o Daily fluctuation of condition
- o Motivation, cooperation, understanding of test and/or instructions.
-
What is the quantification of reliability?
Correlation:
- If a measurement is reliable, individual measurements within a group will maintain their position within the group on repeated measurements.
- A correlation coefficient is used to describe this.
- A value of 1 or –1 indicates a perfect relationship.
- A smaller r value, closer to zero, indicates poorer reliability and a poorer relationship (see the sketch below).
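A minimal Python sketch computing a Pearson correlation coefficient for hypothetical test-retest scores:

```python
from statistics import mean, stdev

def pearson_r(x, y):
    # Sample covariance divided by the product of sample standard deviations.
    mx, my = mean(x), mean(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y)) / (len(x) - 1)
    return cov / (stdev(x) * stdev(y))

trial1 = [10, 14, 12, 18, 16]   # hypothetical scores, trial 1
trial2 = [11, 15, 12, 19, 15]   # same subjects, trial 2
print(f"r = {pearson_r(trial1, trial2):.2f}")  # ~0.96: subjects keep their rank
```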
-
4 reasons that reliability is difficult to accurately quantify
- There are different ways to calculate correlation coefficients.
- There is no established standard about the most appropriate way to interpret a correlation coefficient.
- The correlation coefficient can be affected by the range of values i.e. a larger range will increase the r value and a smaller range will cause it to decrease.
- Most correlation coefficients do not detect systematic errors.
-
Assessing reliability requires assessment of (2 things)
correlation and agreement of scores
The intraclass correlation coefficient (ICC) reflects both correlation and agreement (see the sketch below).
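A minimal sketch of why correlation alone is not enough, using hypothetical rater scores; statistics.correlation requires Python 3.10+:

```python
# Rater B scores every subject exactly 5 points higher than rater A.
from statistics import correlation  # Python 3.10+

rater_a = [10, 14, 12, 18, 16]
rater_b = [a + 5 for a in rater_a]

print(f"r = {correlation(rater_a, rater_b):.2f}")  # 1.00: perfect correlation
# Yet the raters never agree on a single score, so agreement is poor;
# an ICC, which penalizes such systematic offsets, would fall well below 1.
```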
-
Three components of reliability?
- Test-retest reliability
- o A subject's score should be similar on multiple trials if their condition and the testing environment are stable.
- o Testing intervals should be far enough apart to avoid fatigue but not so long that the person's condition might change.
- o Testing effects can improve performance if learning occurs, or can decrease performance if symptoms such as fatigue, shortness of breath, or pain are induced.
- Intraobserver
- The consistency with which one rater assigns scores to a single set of responses on two occasions.
- Interobserver
- The consistency of performance among different raters or judges in assigning scores to the same objects or responses.
-
What is population and site specific reliability?
Population – Reliability needs to be assessed on new populations that have not been tested.
Site – (includes the measurement tool, tester(s), and the environment) Reliability needs to be assessed in all new environments to ensure reliability of the instrument, tester, and environment at the new testing site.
-
Two types of reliability designs?
- Intraobserver reliability refers to the stability of data recorded by one individual across two or more trials.
- fatigue, attention, memory
- learning
- expectation bias
- experience in identifying phenomena and/or dealing with unusual circumstances
- Interobserver refers to the variation between two or more observers in measures performed on the same group of subjects.
- knowledge base, experience
- need to do trials before and during testing
-
How to sample for reliability studies
Reliability is population specific
The sample should adequately represent the population of interest to which the findings can be generalized.
- Random sampling should be used. One must not consciously select subjects or trials of tests that tend to show higher reliability.
- The testing protocol should be conducted as closely to the study or clinical condition as possible.
-
Relationship between standardization and application of study to the population
Typically, the more standardized a test is, the less applicable it is for use in a general population and/or in the clinical setting.
-
Two optimization designs
- Standardization designs
- - compare the reliability of taking measurements under different conditions, i.e., controlling for posture, time of day, quietness of the testing atmosphere, etc.
- Mean designs
- - compare the reliabilities of single measurements with the reliabilities of measurements averaged across several trials.
It is important to consider reliability during both the design phase and the implementation phase of the study.
-
Define Validity
Validity is the extent to which an instrument measures what it is intended to measure. It is concerned with the objectives of the test, and the ability to make inferences from test scores or measurements. Thus, measurement validity is the appropriateness, meaningfulness, and usefulness of the specific inferences made from test scores.
Good reliability does not ensure validity; however, good reliability is an essential component of good validity.
-
What are the 4 types of validity
- Face (logical) validity
- - Does the test measure what it appears to measure? Does the test make sense?
- - Measurements that measure the property of interest through direct observation have higher face validity. Eg. ROM, strength, gait, balance.
- - To establish face validity, one must be clear about the definition of the concept being measured.
- Content validity
- - The extent to which the measure evaluates all aspects of the concept of interest.
- - Completeness – Does it cover all parts of the universe of content?
- - Relevance – Is it free from the influence of factors that are irrelevant to the purpose of the measurement?
- - Emphasis – Does it reflect the relative importance of each part?
- Criterion validity
- - The extent to which one measure is systematically related to other measures or outcomes. The test to be validated, called the target test, is compared with a gold standard, or established criterion measure, that is assumed to be valid. The relationship is quantified by means of a correlation coefficient.
- - Predictive validity refers to whether a test done at one point in time is predictive of future status.
- - Concurrent validity is the extent to which the measurement compares to a gold standard; two measures are taken at the same time.
- Construct validity
- - Validity of the constructs that underlie the measures.
- - Difficult to completely establish construct validity.
- - Refers to the degree to which an instrument measures a particular theory or construct.
-
Define Operational Definition
An operational definition is a specific description of the way in which a construct is presented or measured within a study. An operational definition does not guarantee construct validity but enables the reader to formulate their own opinion about it.
-
Three validation designs
- Content validity
- Completeness- Does it cover all aspects?
- Relevance- Are the most important relevant items measured?
- Emphasis- Is the weighting appropriate?
- Eg. Questionnaires, examination, inventories, interviews, etc.
- Construct validity
- Examination of divergence and convergence
- Factor analysis
- Hypothesis testing
- Criterion validation
- Accuracy-Comparison to a standard measure
- Concurrent-Different measures taken at the same time (concurrently) come up with similar information
- Predictive – A measurement at one point in time is a valid predictor of some criterion score.
-
Define sensitivity, specificity, predictive value, positive predictive value, and negative predictive value
- Sensitivity – The ability of a positive test to correctly identify that the subject does have the condition.
- Eg. Subject has a positive Lachman test and has ACL deficiency.
- Specificity – The ability of a negative test to correctly identify that the subject does not have the condition.
- Eg. Subject has a negative Lachman test and does not have ACL deficiency.
Predictive Value of the test – The likelihood that the test can predict the presence or absence of the condition.
- Positive Predictive Value – Percentage of the positive tests that are correct.
- Negative Predictive Value – Percentage of the negative tests that are correct.
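A minimal Python sketch of all four indices from a hypothetical 2x2 table of test results against a gold standard:

```python
tp, fp = 45, 10   # test positive: with / without the condition
fn, tn = 5, 40    # test negative: with / without the condition

sensitivity = tp / (tp + fn)   # positives correctly identified
specificity = tn / (tn + fp)   # negatives correctly identified
ppv = tp / (tp + fp)           # % of positive tests that are correct
npv = tn / (tn + fn)           # % of negative tests that are correct

print(f"sens {sensitivity:.2f}, spec {specificity:.2f}, "
      f"PPV {ppv:.2f}, NPV {npv:.2f}")
# sens 0.90, spec 0.80, PPV 0.82, NPV 0.89
```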
-
Define prevalence and incidence
Prevalence- Number of cases, new or old, of a condition existing in a given population at any one time
Incidence – Total number of new cases reported during a year expressed per total population on July 1. It is the number of new cases in a given population in a given period of time.
For a test with a given sensitivity and specificity, the likelihood of identification of cases with the condition is increased when the prevalence is high (condition is common)
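A minimal sketch of this last point: holding sensitivity and specificity fixed (hypothetical values), the positive predictive value rises with prevalence, via Bayes' theorem:

```python
def ppv(sens, spec, prevalence):
    true_pos = sens * prevalence               # P(test+ and condition present)
    false_pos = (1 - spec) * (1 - prevalence)  # P(test+ and condition absent)
    return true_pos / (true_pos + false_pos)

for prev in (0.01, 0.10, 0.50):
    print(f"prevalence {prev:.0%}: PPV = {ppv(0.90, 0.80, prev):.2f}")
# 1%: 0.04, 10%: 0.33, 50%: 0.82 -- the same test identifies cases far
# more reliably when the condition is common.
```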
-
What is responsiveness?
Responsiveness - The ability to detect clinically important change over time.
- Considered a component of test validity.
- It is an important quality of a test used to assess the effectiveness of intervention.
- The score must change in proportion to the patient’s status change, and must remain stable when the patient status is unchanged.
-
Difference between sensitivity to change and responsiveness?
Sensitivity can be defined as the ability of an instrument to measure change in a state irrespective of whether it is relevant or meaningful, whereas
responsiveness can be viewed as the ability of an instrument to measure a meaningful, important change.
-
2 ways to analyze responsiveness?
Change score- Difference between the outcome and initial score
- Effect size- A standardized measure of change, typically from initial to final measurements that allows a comparison across different units
- (commonly calculated by subtracting the initial score from the final score and dividing this difference by the standard deviation of the initial scores)
-
What is a criterion referenced test
Tests developed to assess performance based on an absolute criterion. Scores are derived by comparing the performance of the test-taker to a pre-specified standard or "criterion". Results are interpreted relative to a standard that represents an acceptable model or level of performance.
-
What is a norm referenced Test?
A standardized testing instrument by which the test-taker's performance is interpreted in relation to the performance of a group of peers who have previously taken the same test.
- Although referred to as "norm-referenced tests," they are really norm-referenced interpretations of a score of a given test.
- Most standardized tests are norm-referenced.