AP Psych

  1. Replicable
    are you able to repeat the experiment to get the same results
  2. Hypothesis
    statement or expectation of what you think will happen
  3. Falsifiable
    you must be able to prove a theory or experiment wrong
  4. Precise
    psychologists define and measure their variables exactly, so that they and others can replicate the research
  5. Parsimonious
    the simplest most logically economical explanation
  6. Operational Definition
    states exactly what a variable is and how it will be measured within the context of your study; like a recipe
  7. Scientific Method
    a standardized way of making observations, gathering data, forming theories, testing predictions, and interpreting results
  8. Theory
    an explanation that organizes separate pieces of information in a coherent way
  9. Case Studies
    in-depth examinations of a single person. strength: it can highlight individuality. weakness: there is nothing to compare the results to, researcher bias can creep in, and it is very unlikely that this one person actually represents a larger population
  10. Naturalistic Observation
    observe organisms in their natural setting. strength: the behavior of the subject is likely to be most accurate. weakness: researcher has no control over the setting, subjects may not have the opportunity to display the behavior the researcher is looking for. *cannot study topics like attitudes or thoughts using this method
  11. Survey
    study that asks a large number of people questions about their behaviors. strength: allows us to gather a large amount of information; can study things that can't be studied in naturalistic observation (sexual behavior). weakness: subjects may not understand the language; social desirability effect.
  12. Correlation Studies
    looking for relationships between variables; only tells us whether there is a relationship, not which variable caused the other. less control over subjects' environment (hard to rule out alternatives)
  13. Correlation Coefficient
    a statistic that shows the strength of the relationship. the closer to 1 or -1, the stronger the relationship.
  14. Positive Correlation
    there is a direct relationship, the variables are varying in the same direction. ex: amount of study time and grades
  15. Negative Correlation
    the variables are inversely related; they move in opposite directions. ex: as the number of children increases, the IQ scores of the children decrease
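As a rough sketch of cards 13-15 (using made-up example data, not figures from any study), the correlation coefficient and its sign can be computed by hand in Python:

```python
from math import sqrt

def pearson_r(xs, ys):
    """Pearson correlation coefficient: strength and direction of a linear relationship."""
    n = len(xs)
    mean_x, mean_y = sum(xs) / n, sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    sd_x = sqrt(sum((x - mean_x) ** 2 for x in xs))
    sd_y = sqrt(sum((y - mean_y) ** 2 for y in ys))
    return cov / (sd_x * sd_y)

# hypothetical data: study hours vs. exam grades (direct relationship)
hours = [1, 2, 3, 4, 5]
grades = [60, 65, 72, 80, 88]
print(pearson_r(hours, grades))            # close to +1: strong positive correlation

# hypothetical data: absences vs. exam grades (inverse relationship)
absences = [0, 2, 4, 6, 8]
print(pearson_r(absences, [90, 80, 70, 60, 50]))  # close to -1: perfect negative correlation
```

The closer the printed value is to +1 or -1, the stronger the relationship; a value near 0 means little or no linear relationship.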
  16. Internal Validity
    direct causal relationship between IV and DV, being positive that the manipulation of the IV caused the DV.
  17. External Validity
    generalizability of our results to the general population; we expect other similar groups to react the same way.
  18. Sample (selection) Bias
    when random sampling is not used. ex: taking the first 30 volunteers.
  19. Experimenter Bias
    tendency for results to conform to the experimenter's expectations. ex: treating subjects differently depending on what he/she wants from them
  20. Placebo Effect
    when the participants' expectations about the effect of an experimental manipulation have an influence on the DV; expectations make behavior change. ex: telling someone they are drinking alcohol when they actually aren't
  21. Demand Characteristics
    subtle bias that is produced by participants trying to be good subjects and behave in a manner that helps the experimenter.
  22. Extraneous Variables
    any variable not intentionally included in the research design that may affect the DV. extra variables. ex: sickness, distractions.
  23. Confounding Variables
    variables other than the IV that participants in one group receive that participants in the other group don't. ex: time of day, sunlight.
  24. Blinding
    a way to assert more control and achieve higher experimental validity
  25. Single Blinding
    subjects aren't aware of whether they're in the experimental group or control group.
  26. Double Blinding
    neither subjects nor the experimental assistants measuring the DV are aware of which group subjects are assigned to. reduces experimenter bias and demand characteristics.
  27. Counterbalancing
    reducing "order" effects. ex: testing some subjects from group A and some subjects from group B both in the morning.
  28. Steps of an Experiment
    hypothesis, operationalize your population, sample, independent and dependent variables, expose, analyze, publish
  29. Within Subjects Design
    subjects serve as both the experimental and control group (pre test, post test)
  30. Between Subjects Design
    the DV is compared between two different experimental and control groups
  31. Continuous Variable
    variables that do not change and cannot be manipulated during the experiment. ex: gender, height.
  32. Null Hypothesis
    • the assumption that the IV will have no effect on the DV
    • if we notice a difference in results between the experimental and control groups, we reject the null and give support to the hypothesis
    • if we fail to notice a difference in results between the two groups, we fail to reject the null hypothesis and DO NOT support the hypothesis
  33. Type I Error
    rejecting the null hypothesis when in fact it is true (a false positive). ex: we conclude a patient is sick and admit them to the hospital for observation, later finding out that we made an error and the patient is not sick: "so sorry, you can go home now"
  34. Type II Error
    failing to reject the null hypothesis when it is actually false (a false negative). ex: assuming the patient is not sick and sending him/her home, where they later die.
  35. 3 Main Research Tools
    descriptive studies, correlation studies and experimental designs.
  36. Pre-Experimental Designs
    no control group; just a single participant or group being studied, with no comparison. ex: pretest-posttest: one group is tested, given the treatment, and then retested
  37. Quasi-Experimental Design
    design doesn't include randomization; groups are pre-existing rather than randomly assigned, so there is no true control group
  38. True Experimental Designs
    control groups and random assignment of participants to groups
  39. Independent Variables
    the variable the researcher thinks has an effect and would like to manipulate; what we think will have an effect.
  40. Dependent Variable
    variable that is observed or measured by the experimenter to determine the effect of the IV (must be measurable and observable); the outcome variable
  41. Control
    the ability of the experimenter to remove factors that might cause or affect the results even though they aren't the IV
  42. Sample / Sampling
    the subset of the population you actually study / how you select your participants from the population you want to make conclusions about
  43. Representative Sampling
    selecting a sample that mirrors the key characteristics of the population being studied
  44. Simple Random Sample
    randomly selecting a number of participants from a group (equal chance)
  45. Stratified Random Sample
    randomly selecting participants from different subsets of the population
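A minimal sketch of both sampling methods from cards 44-45, assuming a made-up school population (the field names and sizes here are illustrative, not from the cards):

```python
import random

random.seed(0)  # for a repeatable illustration

# hypothetical population: 400 students, each with a grade level ("stratum")
population = [{"id": i, "grade": random.choice(["9th", "10th", "11th", "12th"])}
              for i in range(400)]

# Simple random sample: every student has an equal chance of selection
simple = random.sample(population, 40)

# Stratified random sample: randomly sample within each grade-level subset,
# in proportion to that subset's share of the population
strata = {}
for student in population:
    strata.setdefault(student["grade"], []).append(student)

stratified = []
for grade, members in strata.items():
    k = round(len(members) * 40 / len(population))
    stratified.extend(random.sample(members, k))
```

Stratifying guarantees every grade level appears in the sample roughly in proportion to its size, which a simple random sample only achieves on average.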
  46. Random Assignment
    allows the researcher to have control over chance variables; both groups should be relatively the same except for the exposure to the IV
  47. Experimental Group
    the subjects receiving the IV
  48. Control Group
    not exposed to the IV, the subjects the experimental group is compared to.
  49. Statistical Significance
    the results of the study are unlikely to have occurred simply by chance.
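A minimal simulation sketch tying statistical significance to the Type I error idea in card 33 (the coin-flip setup and the "far from 50" cutoff are my own illustration, not from the cards): even when the null hypothesis really is true, a p < .05-style cutoff still declares "significance" about 5% of the time by chance alone.

```python
import random

random.seed(42)

def experiment(n=100):
    """Flip a fair coin n times; here the null hypothesis ('no effect') really is true."""
    heads = sum(random.random() < 0.5 for _ in range(n))
    # declare the result 'significant' if the count is far from 50
    # (roughly a p < .05 cutoff for n = 100)
    return heads <= 40 or heads >= 60

trials = 10_000
false_alarms = sum(experiment() for _ in range(trials))
print(false_alarms / trials)  # close to .05: the Type I error rate
```

Every one of those "significant" results is a Type I error, since the coin was fair all along; this is why a significant result means "unlikely by chance," not "impossible by chance."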
  50. Validity
    extent to which the researcher can claim that what was found was actually the result of something the researcher did. the fewer alternative explanations that can be offered the higher the validity
  51. Reliability
    refers to the consistency and repeatability of scores in an experiment; use the test-retest method.
  52. Meta Analysis
    taking a bunch of different studies and analyzing them as a whole
  53. Steps of Research Design
    • find a Research problem or question
    • determine your Variables
    • Design your experiment
    • formulate a Hypothesis
    • Operationalize your variables
    • determine the Population
    • choose Independent / Dependent variables
    • Expose
    • Analyze
    • Publish