Final Exam

  1. Z scores
    *One type of standard score is the z score

    SD (standard deviation) describes how much the typical score deviates from the mean

    The bigger the numerator (the distance between x and the mean), the bigger the z: the bigger the distance

    Z tells you, for a given x, how far it is from the mean in standard deviation units.

    The lower the z (in absolute value), the closer you are to what is typical

    -a negative z score is below the mean

    -a positive z score is above the mean

    Z scores measure how many standard deviations you are away from the mean. You need to have a zero in front of these scores (e.g., z = 0.50)
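As a quick sketch of the formula z = (x - mean) / SD, using made-up class numbers (the 85/70/10 values are hypothetical, not from the notes):

```python
# Minimal sketch: z score = (x - mean) / SD (hypothetical example values).
def z_score(x, mean, sd):
    """Number of standard deviations x lies from the mean."""
    return (x - mean) / sd

# A score of 85 in a class with mean 70 and SD 10:
print(z_score(85, 70, 10))  # 1.5 -> 1.5 SDs above the mean (positive z)
print(z_score(55, 70, 10))  # -1.5 -> 1.5 SDs below the mean (negative z)
```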
  2. Independent Variable
    exists only when you are doing an experiment (it is the variable the experimenter manipulates).
  3. Why can we never accept the experimental hypothesis?
    *We can never be sure that the difference occurred because of the reason indicated and not due to chance!

    *Support for the experimental hypothesis comes from rejecting the null.

    No matter how much support you have for something, that does not mean it is accepted, approved, or even correct. Reliability is not the same as validity.
  4. Decisions regarding the null….
    *Reject the null hypothesis: support the scientific hypothesis

    *Do not reject (fail to reject) the null hypothesis: refute the scientific hypothesis
  5. Alpha level
    consider your “alpha level” as your willingness to make a huge mistake (a Type I error)
  6. Probability
    *Determination of how likely it is that a result occurred by chance.

    *p<.50

    -Result due to chance in 50 out of 100 cases. Probability cannot be 1 or greater, because that would mean 100% (or more).

    *p<.001

    -Result due to chance in fewer than 1 out of 1,000 cases.
  7. *Statistical significance
    -Reject the null hypothesis and support the scientific hypothesis
  8. If null hypothesis is true there is
    • no relation between the variables.
    • A Type I error is the most severe: claiming to the world that there is a relation between the variables, supporting the scientific hypothesis and rejecting the null, when we ought to have failed to reject the null hypothesis. You don’t want to create an atmosphere where you can’t trust each other; if you can’t trust the foundation, then you can’t trust future discoveries.
    • We don’t support or accept nothingness as a result. Failing to find relations doesn’t mean the null is true; there could be any number of reasons you failed (the right statistics haven’t been invented yet, wrong tools). With a Type I error you are misleading people.
  9. p-value
    • The p-value, given your actual data, is the exact probability of committing a Type I error when you support the scientific hypothesis and reject the null: the exact probability that you are wrong. There is always a p-value, and it tells you the chances of making a Type I error.
    • We want our p-value to be less than alpha. If the probability is lower than alpha, then we have statistical significance.
    • The alpha level determines the threshold of acceptability: our acceptable risk of committing a Type I error. We want to make sure the likelihood of being wrong is low; we only accept a 5% chance that we are wrong because it is a small risk.
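The decision rule described above (p below alpha means statistical significance) can be sketched like this; the p-values passed in are hypothetical:

```python
# Minimal sketch of the p-vs-alpha decision rule (example p-values made up).
def decide(p_value, alpha=0.05):
    """Reject the null only when p falls below the a priori alpha level."""
    if p_value < alpha:
        return "reject the null (statistically significant)"
    return "fail to reject the null"

print(decide(0.03))  # p < .05, so reject the null
print(decide(0.20))  # p > .05, so fail to reject
```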
  10. Type II error
    • Fail to reject the null and refute the scientific hypothesis: you did not find any relation, you have nothing in your data. The error is that you ought to have rejected the null and supported the scientific hypothesis; you are missing a relation between the variables that is actually there.
    • Why is this a less severe error? You’re not misleading anybody; you are saying there is no support for the scientific hypothesis and refuting the claim.
    • Failing to reject the null is less severe because you’re leaving it to future generations to find the relation between the relevant variables. We would rather miss things waiting to be found than train people with false information. We trust that the next generation will find the relation.
  11. Nondirectional hypothesis-
    the alpha level is split between the two tails of the distribution; the two halves add up to the alpha level, allowing an overall 5% chance of committing a Type I error
  12. CORRELATION
    is a measure of the degree of relationship between two variables. Necessary (but not sufficient) for causation
  13. Causation relation
    Asymmetrical
  14. Prediction
    if there is a correlation between two variables, then from a person’s score on one variable we can predict their score on the other
  15. Quality of prediction
    In general, the greater the correlation between the two variables, the more accurately we can predict the standing on one from the standing on the other.
  16. Bivariate distibtuions
    • distributions that show the relationship between two variables

    The relationship between two variables can usually be summarized with a straight line

    -with standardized scores, a steeper line indicates a closer relationship between the two variables
  17. How is the mean like the variance
    They are both “averages”: at their core they are telling you a typical score. The variance is a mean squared-deviation score.
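A small sketch of the “variance is a mean deviation score” idea, using made-up scores: the mean averages the raw scores, and the variance averages the squared deviation scores.

```python
# Sketch: mean and variance are both averages (scores below are hypothetical).
def mean(scores):
    """Average of the raw scores: the typical score."""
    return sum(scores) / len(scores)

def variance(scores):
    """Average of the squared deviation scores: a mean squared deviation."""
    m = mean(scores)
    return sum((x - m) ** 2 for x in scores) / len(scores)

scores = [2, 4, 6, 8]
print(mean(scores))      # 5.0 (average raw score)
print(variance(scores))  # 5.0 (average squared deviation: (9+1+1+9)/4)
```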
  18. How is the mean like the standard deviation
    More intimately related: the SD is in the same units as the mean, so it can be interpreted alongside the mean scores. The variance, being in squared units, can’t be interpreted that way at all.
  19. Statistical hypothesis
    the basic claim that two variables are related to one another; the difference that you find is attributed to the variable identified as the cause
  20. Null hypothesis-variables are
    not related.

    Unrelated to the other variable given

    **very important that you talk about both hypotheses.
  21. failure doesn’t mean the null hypothesis is
    true. Be more creative in trying to find the relation; if you have a good theory, then be motivated to find something
  22. Alpha
    • Alpha level is your “a priori” threshold: before you even see your results, you establish a standard for interpreting them. It is your willingness to make a certain kind of mistake.
    • From the start, you understand that there is a certain chance that you could be making a mistake.
  23. Scatterplot
    A graph of a bivariate distribution consisting of dots at the points of intersection of paired scores
  24. The Hugging Principle
    The closer the points cluster around the line, the stronger the relationship between two variables
  25. Positive correlation
    A linear relationship in which:

    -high scores on the first variable are generally paired with high scores on the second variable.

    - Low scores on the first variable are generally paired with low scores on the second variable.


    The variable itself (e.g., hours of studying) doesn’t increase or decrease; it is people’s scores on it that increase or decrease together.
  26. Negative correlation
    A linear relationship in which:

    -High scores on the first variable are generally paired with low scores on the second variable

    -Low scores on the first variable are generally paired with high scores on the second variable
  27. No Correlation
    Both high and low values on the first variable are equally paired with both high and low values on the other.
  28. How to interpret r?
    *the correlation coefficient r ranges from -1 to +1

    - (+1) indicates a perfect positive correlation

    - (-1) indicates a perfect negative correlation

    - r = 0 indicates no correlation

    -don’t put a 0 in front of the decimal (since r can never exceed 1, write r = .50, not 0.50)
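A sketch of how r is computed (covariation divided by the product of the two variables’ variation), with made-up paired scores that give the perfect cases described above:

```python
import math

# Sketch: Pearson correlation coefficient, r (example data is hypothetical).
def pearson_r(xs, ys):
    """r ranges from -1 to +1; sign gives direction, magnitude gives strength."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))  # co-variation
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))          # variation in x
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))          # variation in y
    return cov / (sx * sy)

print(pearson_r([1, 2, 3], [2, 4, 6]))  # approx +1: perfect positive
print(pearson_r([1, 2, 3], [6, 4, 2]))  # approx -1: perfect negative
```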
  29. Correlation does not prove causation
    finding a relationship between variables X and Y is a necessary but NOT sufficient condition for concluding that there is a cause-and-effect relationship between them. Causation is something you get when you do an experiment; correlation is a necessary part of causation. Non-experimental and experimental studies are both about relations between variables.
  30. t-test for a single sample
    *Comparing a sample mean to a population

    *estimating population variance from sample

    -biased and unbiased estimates

    n-1 is the same thing as our degrees of freedom

    • *df=n-1
    • Degrees of freedom- number of scores “free to vary”

    estimated population variance = SSx/df = SSx/(n-1)
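The steps above (unbiased variance estimate from SS/(n-1), df = n-1, then comparing the sample mean to the population mean) can be sketched as follows; the sample data and population mean are made up for illustration:

```python
import math

# Sketch: t-test for a single sample (hypothetical data).
def one_sample_t(sample, pop_mean):
    """Return (t, df): t = (sample mean - pop mean) / estimated standard error."""
    n = len(sample)
    m = sum(sample) / n
    ss = sum((x - m) ** 2 for x in sample)  # SSx: sum of squared deviations
    est_var = ss / (n - 1)                  # unbiased estimate: SSx / df
    se = math.sqrt(est_var / n)             # estimated standard error of the mean
    return (m - pop_mean) / se, n - 1       # df = n - 1

t, df = one_sample_t([5, 6, 7, 8, 9], pop_mean=5)
print(t, df)  # t is about 2.83 with df = 4
```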
  31. The higher the r2 value the
    • better you are able to predict, with some degree of certainty.
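Since r² is just the square of the correlation coefficient, a tiny sketch makes the point that a larger r² means better prediction (the r values below are hypothetical):

```python
# Sketch: r squared as prediction quality (example r values are made up).
def r_squared(r):
    """Proportion of variability in one variable accounted for by the other."""
    return r ** 2

print(r_squared(0.9))   # about .81: strong, predicts well
print(r_squared(-0.3))  # about .09: weak, predicts poorly (sign drops out)
```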
Author: LaurenFleming
ID: 73103
Card Set: Final Exam
Description: Class Notes