-
Replicable
being able to repeat the experiment and get the same results
-
Hypothesis
statement or expectation of what you think will happen
-
Falsifiable
it must be possible to prove a theory or experiment wrong
-
Precise
by stating hypotheses and procedures exactly, psychologists ensure that they can replicate their own and others' research
-
Parsimonious
the simplest, most logically economical explanation
-
Operational Definition
states exactly what a variable is and how it will be measured within the context of your study. like a recipe
-
Scientific Method
a standardized way of making observations, gathering data, forming theories, testing predictions, and interpreting results
-
Theory
an explanation that organizes separate pieces of information in a coherent way
-
Case Studies
in-depth examinations of a single person. strength: can highlight individuality. weakness: there is nothing to compare the results to, researcher bias can creep in, and it is very unlikely that this one person actually represents a larger population
-
Naturalistic Observation
observe organisms in their natural setting. strength: the subject's behavior is likely to be the most accurate. weakness: the researcher has no control over the setting, and subjects may not have the opportunity to display the behavior the researcher is looking for. *cannot study topics like attitudes or thoughts using this method
-
Survey
study that asks a large number of people questions about their behaviors. strength: allows us to gather a large amount of information, and can study things that can't be studied in naturalistic observation (sexual behavior). weakness: subjects may not understand the language, social desirability effect.
-
Correlation Studies
looking for relationships between variables; only tells us if there is a relationship, not which variable caused the other. less control over the subjects' environment (hard to rule out alternatives)
-
Correlation Coefficient
a statistic that shows the strength of the relationship. the closer to 1 or -1, the stronger the relationship.
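a minimal sketch of what the coefficient looks like in practice, assuming some made-up study-time and grade numbers (numpy is just one convenient way to compute Pearson's r):

```python
# hypothetical data: hours studied vs. exam grades
import numpy as np

study_hours = np.array([1, 2, 3, 4, 5, 6])        # hypothetical hours studied
exam_grades = np.array([55, 60, 70, 72, 80, 88])  # hypothetical grades

r = np.corrcoef(study_hours, exam_grades)[0, 1]   # Pearson correlation coefficient
print(f"r = {r:.2f}")  # close to +1, so a strong positive correlation
```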
-
Positive Correlation
there is a direct relationship, the variables are varying in the same direction. ex: amount of study time and grades
-
Negative Correlation
the variables are inversely related. ex: as the number of children increases, the IQ scores of the children decrease. hands move in opposite directions
-
Internal Validity
a direct causal relationship between the IV and DV; being confident that the manipulation of the IV caused the change in the DV.
-
External Validity
generalizability of our results to the general population; we expect other similar groups to react the same way.
-
Sample (selection) Bias
when random sampling is not used. ex: taking the first 30 volunteers.
-
Experimenter Bias
tendency for results to conform to the experimenter's expectations. ex: treating subjects differently depending on what he/she wants from them
-
Placebo Effect
when the participants' expectations about the effect of an experimental manipulation have an influence on the DV. expectations make behavior change. ex: telling someone they are drinking alcohol when they actually aren't
-
Demand Characteristics
subtle bias that is produced by participants trying to be good subjects and behave in a manner that helps the experimenter.
-
Extraneous Variables
any variable not intentionally included in the research design that may affect the DV. extra variables. ex: sickness, distractions.
-
Confounding Variables
variables other than the IV that participants in one group may experience that participants in the other group don't. ex: time of day, sunlight.
-
Blinding
keeping subjects and/or experimenters unaware of group assignment; a way to assert more control and achieve higher experimental validity
-
Single Blinding
subjects aren't aware of whether they're in the experimental group or the control group.
-
Double Blinding
neither subjects nor the experimental assistants measuring the DV are aware of which group subjects are assigned to. reduces experimenter bias and demand characteristics.
-
Counterbalancing
reducing the "order" effect. ex: testing some subjects from group A and some subjects from group B both in the morning.
-
HOPS IV DV EAP
Hypothesis, Operationalize your variables, Population, Sample, Independent and Dependent variables, Expose, Analyze, Publish
-
Within Subjects Design
subjects serve as both the experimental and control group (pre test, post test)
-
Between Subjects Design
the DV is compared between two different groups: an experimental group and a control group
-
Continuous Variable
variables that do not change or can't be manipulated during the experiment. ex: gender, height.
-
Null Hypothesis
the assumption that the IV will have no effect on the DV
- if we notice a difference in results between the experimental and control groups, we reject the null and give support to the hypothesis
- if we fail to notice a difference in results between the two groups, then we fail to reject the null hypothesis and DO NOT support the hypothesis
-
Type I Error
rejecting the null hypothesis when in fact it is true. ex: we reject the null (that a patient is not sick) and admit them to the hospital for observation, later finding out that we made an error; the patient is not sick. "so sorry, you can go home now"
-
Type II Error
we fail to reject the null hypothesis when in fact it is false. ex: assuming the patient is not sick, we send him/her home, where they later die.
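a small simulation sketch of the two error types, assuming normally distributed scores, alpha = 0.05, and an effect size and sample size chosen purely for illustration (numpy/scipy are just convenient tools here):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
alpha, n, runs = 0.05, 30, 2000

type1 = 0  # null is TRUE (no real effect) but we reject it anyway
type2 = 0  # null is FALSE (real effect) but we fail to reject it
for _ in range(runs):
    # no real difference between groups -> any rejection is a Type I error
    a, b = rng.normal(0, 1, n), rng.normal(0, 1, n)
    if stats.ttest_ind(a, b).pvalue < alpha:
        type1 += 1
    # real difference of 0.5 SD -> failing to reject is a Type II error
    a, b = rng.normal(0, 1, n), rng.normal(0.5, 1, n)
    if stats.ttest_ind(a, b).pvalue >= alpha:
        type2 += 1

print(f"Type I rate:  {type1 / runs:.2f}")  # hovers around alpha (0.05)
print(f"Type II rate: {type2 / runs:.2f}")  # depends on effect size and n
```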
-
3 Main Research Tools
descriptive studies, correlation studies and experimental designs.
-
Pre-Experimental Designs
no control group, just a single participant being studied, no comparison. ex: pre test, post test. one group is tested, given the treatment, and then retested
-
Quasi-Experimental Design
no control group; the design doesn't include randomization
-
True Experimental Designs
control groups and random assignment to groups
-
Independent Variables
the variable that the researcher thinks has an effect and manipulates in the experiment. what we think will have an effect.
-
Dependent Variable
variable that is observed or measured by the experimenter to determine the effect of the IV (must be measurable and observable). the outcome variable
-
Control
the ability of the experimenter to remove factors that might cause or affect the results even though they aren't the IV
-
Sample / Sampling
the group of participants selected from the population you want to make conclusions about / how you select your participants
-
Representative Sampling
wanting your sample to be similar to the population on the key characteristics in question
-
Simple Random Sample
randomly selecting a number of participants from a group (equal chance)
-
Stratified Random Sample
randomly selecting participants from different subsets of the population
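a minimal sketch of the difference between the two sampling methods above, assuming a hypothetical pool of 100 students split into two made-up "year" strata (the sizes are arbitrary):

```python
import random

random.seed(1)
population = [{"id": i, "year": "freshman" if i < 60 else "senior"} for i in range(100)]

# simple random sample: every member has an equal chance of being picked
simple_sample = random.sample(population, 20)

# stratified random sample: sample from each subset (stratum) separately
freshmen = [p for p in population if p["year"] == "freshman"]
seniors  = [p for p in population if p["year"] == "senior"]
stratified_sample = random.sample(freshmen, 12) + random.sample(seniors, 8)

print(len(simple_sample), len(stratified_sample))  # 20 20
```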
-
Random Assignment
allows the researcher to have control over chance variables. both groups should be relatively the same except for the exposure to the IV
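random assignment is easy to picture as a shuffle-and-split; a minimal sketch, assuming 40 hypothetical participant IDs and an even 50/50 split:

```python
import random

random.seed(2)
participants = list(range(40))  # hypothetical participant IDs
random.shuffle(participants)    # chance decides who ends up where

experimental_group = participants[:20]  # will be exposed to the IV
control_group      = participants[20:]  # not exposed to the IV
```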
-
Experimental Group
the subjects receiving the IV
-
Control Group
not exposed to the IV, the subjects the experimental group is compared to.
-
Statistical Significance
the results of the study are unlikely to have occurred simply by chance.
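a concrete sketch of how this is usually checked: run a test and compare the p-value to the conventional .05 cutoff (the scores below are invented, and scipy's independent-samples t-test is just one option):

```python
from scipy import stats

experimental_scores = [78, 85, 82, 90, 88, 79, 84, 91]  # made-up DV scores
control_scores      = [70, 74, 68, 77, 72, 75, 69, 73]

result = stats.ttest_ind(experimental_scores, control_scores)
print(f"p = {result.pvalue:.4f}")
if result.pvalue < 0.05:
    print("unlikely to be chance alone -> statistically significant")
else:
    print("could easily be chance -> not statistically significant")
```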
-
Validity
extent to which the researcher can claim that what was found was actually the result of something the researcher did. the fewer alternative explanations that can be offered the higher the validity
-
Reliability
refers to the consistency and repeatability of the scores of an experiment. use the test-retest method.
-
Meta Analysis
taking a bunch of different studies and analyzing them as a whole
-
RVD HOPS IVDV EAP
- find a Research problem or question
- determine your Variables
- Design your experiment
- formulate your Hypothesis
- Operationalize your variables
- determine the Population
- select your Sample
- identify your Independent / Dependent variables
- Expose subjects to the IV
- Analyze the data
- Publish the results