Soc 315 midterm

  1. theory
    explanation of observed regularities or patterns
  2. what are theories composed of?
    • definitions
    • descriptions
    • relational statements
  3. relational statements
    connect two or more variables so that if you know the value of one variable you can convey information about the other variable
  4. deterministic relational statements
    two variables that always go together in a particular way
  5. probabilistic relational statements
    two variables go together with some degree of regularity but the relationship isn't inevitable
  6. theories of the middle range
    explanations of specific social phenomena (ex. job satisfaction, criminal behaviour, suicide, etc.)
  7. grand theories
    • broad sweeping historical explanations of societal change
    • general and abstract
    • (ex. feminism, structural-functionalism, etc.)
  8. concept
    • general/abstract idea
    • category that serves to organize observations and ideas about some aspects of the social world
  9. deductive approach
    • theory -> observations/findings
    • researcher comes up with a theory to explain a certain phenomenon, then deduces a hypothesis from it to test
    • if the findings don't support the hypothesis, the theory needs to be revised or rejected
  10. inductive approach
    • observations/findings -> theory
    • a theory is the outcome of research, not the starting point
    • you are constructing a theory instead of testing a theory
  11. epistemological assumptions
    • notions of what can be known and how knowledge can be acquired
    • what is considered "acceptable knowledge"?
  12. positivism
    • an epistemological position
    • affirms the importance of following the natural sciences 
    • values empiricism (only phenomena confirmed by the senses can be accepted as knowledge, subject to empirical testing)
    • "value-free" science - scientists from any place or situation if given the same data must be able to come to the same conclusions
  13. deduction and induction in positivism?
    • deduction: key purpose of theory is to generate hypotheses that can be tested, allow assessments of explanations of observed laws and principles
    • induction: can also arrive at knowledge by gathering facts (systematic collection) which leads to a generalization of laws
  14. where do positivists think scientific statements and normative statements belong?
    • scientific statements (describe how and why certain social phenomena operate the way they do) belong in science
    • normative statements (outline whether certain acts or social conditions are morally acceptable) belong in philosophy or religion
  15. interpretivism
    • sees the role of social scientists as grasping the subjective meanings of people's actions
    • people use common-sense constructs to interpret their lives, and these thoughts motivate their behaviour
    • interpretivists want to access this 'common-sense thinking' in order to interpret people's actions and the social world from the point of view of the actors
    • alternative to the social science usually done by positivists
  16. ontological considerations
    • branch of philosophy concerned with the nature of reality
    • ex. "what kind of things have existence?"
  17. If you answer 'yes' to this question what view do you hold: "Do social phenomena have an objective reality, independent of our perceptions?"
    • objectivist
    • think social phenomena have an existence that is independent of social actors' perceptions of them
    • relation to research: likely to emphasize the formal properties of organizations
  18. If you answer 'yes' to this question what view do you hold: "Is what passes for reality merely a set of social constructions?"
    • constructionist
    • no objective social reality against which our conceptions and views of the world may be tested
    • relation to research: likely to focus on active involvement of people in reality construction
  19. values
    standard by which we assess each other
  20. quantitative research
    uses numbers, statistics in collection and analysis of data
  21. qualitative research
    relies on words, non-numerical symbols
  22. influences on social research
    • theory
    • practical considerations
    • epistemology
    • ontology
    • politics
    • values
  23. research methods
    the logic and techniques of collecting and analyzing data
  24. research design
    broad structure that guides the collection and analysis of data
  25. nomothetic
    explanation that applies to humanity in general, not just the people in the study
  26. criteria for being nomothetic research
    • 1. correlation - proposed cause and effect must vary together
    • 2. time order - the proposed cause must happen earlier in time than the proposed effect
    • 3. non-spuriousness - alt. explanations for the correlation observed must be ruled out
  27. idiographic explanations
    • doesn't necessarily apply to others, but helps to explain why the actors of interest behave the way they do
    • involve a detailed 'story' or description of people studied based on empathetic understanding
  28. 3 most prominent criteria for evaluating social research
    • reliability
    • replicability
    • validity
  29. types of validity for evaluating social research
    • measurement/construct validity
    • internal validity
    • external validity
  30. naturalism
    • style of research that seeks to minimize the use of artificial methods of data collection
    • social world should be disturbed as little as possible
  31. deterministic hypothesis
    • either true or false - if the conditions hold, the predicted outcome always occurs
    • ex. heat water to 100°C and it will boil
  32. probabilistic hypothesis
    • a prediction that holds with some degree of regularity rather than always
    • ex. children from poorer homes are less likely to do well in school
  33. measurement reliability
    • has to do with the stability/consistency of measurement
    • an empirical issue
    • do you get the same result if you measure again?
  34. inter-researcher reliability
    • consistency across researchers
    • ex. clearly asked questions with no room for interviewers to modify questions
  35. inter-observer reliability
    • consistency across observers
    • clear rules for recording what you are observing
  36. measurement validity
    • are you really targeting what you want to be measuring?
    • the adequacy of the measurement -> conceptual issue
    • ex. does the number of rooms in a house measure wealth? not really
  37. element or unit
    single case in the population
  38. population
    all cases about which you are seeking knowledge, or all the cases to which your conclusions are meant to apply
  39. sampling frame
    list of all possible elements in the population from which the sample will be selected
  40. sample
    subset of a population, elements selected for investigation
  41. representative sample
    • sample that is a microcosm of the population
    • "represents" the essential characteristics of the population
  42. probability sample
    sample selected using a random process so that every element has a known chance of being selected
  43. non-probability sample
    • sample selected using non-random method
    • some elements are more likely to be picked
  44. sampling error
    error of estimation that occurs when there's a difference between the characteristics of a sample and those of the population from which it was selected
  45. non-response
    when a unit selected to participate in the study refuses, can't be contacted, etc
  46. census
    attempt to collect data from all elements in a population rather than a sample
  47. random sampling
    • using random number generators or similar tools to ensure that everyone has the same chance of getting picked
    • chance and nothing else determines what elements are selected to be part of the sample (see the sketch below)
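A minimal sketch of drawing a simple random sample with a random number generator (the sampling frame and sample size here are hypothetical):

```python
import random

# Hypothetical sampling frame: student ID numbers 1 through 500.
sampling_frame = list(range(1, 501))

# Draw a simple random sample of 25 elements; chance alone decides who is picked.
sample = random.sample(sampling_frame, k=25)
print(sorted(sample))
```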
  48. systematic sample
    • selected directly from the sampling frame, ex. if the sampling fraction is 1 in 20, choose a random starting point within the sampling frame, then select every 20th person (see the sketch below)
    • need to make sure that there is no ordering/pattern within the sampling frame
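A minimal sketch of that systematic selection (hypothetical frame of 400 names, sampling fraction of 1 in 20):

```python
import random

frame = [f"person_{i}" for i in range(1, 401)]   # hypothetical sampling frame
k = 20                                           # sampling fraction of 1 in 20

start = random.randrange(k)                      # random start between 0 and k-1
systematic_sample = frame[start::k]              # then every 20th element
print(len(systematic_sample), systematic_sample[:3])
```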
  49. stratified random sample
    • stratifying the population into subgroups by a criterion (ex. faculty), and selecting a simple random sample or systematic sample from each of the resulting strata
    • ensures the sample is stratified in the same way as the population (see the sketch below)
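A rough sketch of that idea, assuming a hypothetical population where faculty is the stratifying criterion and a 10% simple random sample is taken within each stratum:

```python
import random
from collections import defaultdict

# Hypothetical population: (id, faculty) pairs; faculty is the stratifying criterion.
population = [(i, random.choice(["Arts", "Science", "Engineering"])) for i in range(1000)]

strata = defaultdict(list)
for unit in population:
    strata[unit[1]].append(unit)

# Take a 10% simple random sample within each stratum, so the sample
# is stratified in the same proportions as the population.
sample = []
for faculty, members in strata.items():
    sample.extend(random.sample(members, k=len(members) // 10))

print({faculty: len(members) // 10 for faculty, members in strata.items()})
```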
  50. multi-stage cluster sampling
    • primary sampling unit isn't individual units of a population, but a cluster of them (aggregate)
    • ex. sampling students from universities within specific geographic regions
    • greater efficiency in data collection
    • seek diversity in primary sampling units
    • typically involve some kind of random sampling at the final stage
    • sample weights may be needed!
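A rough sketch of the multi-stage logic described in the card above (hypothetical universities as clusters and students as final units; a real design would also handle sample weights):

```python
import random

# Hypothetical clusters: each university holds a list of student IDs.
universities = {f"university_{u}": [f"u{u}_student_{s}" for s in range(200)]
                for u in range(1, 21)}

# Stage 1: randomly select a few clusters (primary sampling units).
chosen_universities = random.sample(list(universities), k=4)

# Stage 2: simple random sample of students within each chosen cluster.
sample = []
for uni in chosen_universities:
    sample.extend(random.sample(universities[uni], k=25))

print(len(sample))   # 4 clusters x 25 students sampled
```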
  51. _______ the size of the sample probably _______ the precision of the estimates it can create, and sampling error tends to _________
    increasing, increases, decrease
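A worked illustration of why this holds, using the standard error of a sample proportion under simple random sampling (the proportion of 0.5 is a hypothetical worst case):

```python
import math

p = 0.5   # hypothetical population proportion (largest possible sampling variability)

# Standard error of a sample proportion under simple random sampling: sqrt(p(1-p)/n).
# As n increases, the standard error (and hence sampling error) shrinks.
for n in (100, 400, 1600):
    se = math.sqrt(p * (1 - p) / n)
    print(f"n = {n:>4}: standard error ≈ {se:.4f}")
```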
  52. response rate
    % of the sample that actually participates in the study
  53. non-response bias
    • the extent to which people included in the sample differ from the population as a whole
    • the people who don't answer the survey may differ systematically from those who do
  54. face validity
    • the measure appears to reflect the content of the concept in question
    • does it sound reasonable to you?
    • assessed conceptually
  55. content validity
    • can be gauged by employing a criterion relevant to the concept in question but on which cases differ
    • does the item (or scale) cover the range of possible content?
    • assessed conceptually, typically by experts in the field
  56. convergent validity
    • might be gauged by comparing it to measures of the same concept developed through other methods
    • does a (new) measure of a concept correlate with other accepted measures of that concept?
    • assessed empirically
    • ex. student attendance rates and student satisfaction surveys
  57. construct validity
    • seeing whether concepts used in the research relate to each other in a way that is consistent with what their theories would predict
    • is a (new) measure of a concept correlated with other variables that you predict (theoretically) should be related?
    • assessed empirically
    • ex. assignment - education and environmental concerns
  58. internal validity
    is the independent variable really affecting the dependent variable or is something else responsible?
  59. spurious relationships
    is the presumed cause (x) really responsible for the outcome (y) or is there a third variable affecting both?
  60. things needed to prove causality?
    • evidence of correlation
    • temporal ordering (cause precedes effect)
    • non-spuriousness (absence of alternative explanations for the correlation)
  61. external validity
    can you generalize from this study to other settings? the population?
  62. sources of error in survey research? (6)
    • sample bias
    • random sampling error
    • non-response error
    • measurement (how you ask the question, reliability and validity of measurement)
    • data preparation error
    • interpretive error
  63. confidence intervals
    used to statistically estimate the amount of random sampling error around a sample estimate (see the sketch below)
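A minimal sketch of a 95% confidence interval for a sample proportion (sample size and proportion are made up; assumes simple random sampling):

```python
import math

n = 400          # hypothetical sample size
p_hat = 0.55     # hypothetical sample proportion agreeing with a statement

se = math.sqrt(p_hat * (1 - p_hat) / n)      # estimated standard error
margin = 1.96 * se                           # 1.96 ≈ z value for 95% confidence
print(f"95% CI: {p_hat - margin:.3f} to {p_hat + margin:.3f}")
```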
  64. sample weights
    • adjustment factors used to generate population estimates from disproportionate stratified random samples (see the sketch below)
    • can also be calculated in some cases to handle different levels of non-response
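A small sketch of weighting a disproportionate stratified sample back to the population (all shares and responses are made up; weight = population share / sample share):

```python
# Hypothetical disproportionate stratified sample: undergrads are 80% of the
# population but only 50% of the sample (and vice versa for grads).
weights = {"undergrad": 0.80 / 0.50, "grad": 0.20 / 0.50}

# Hypothetical respondents: (stratum, hours studied per week).
respondents = [("undergrad", 10), ("undergrad", 14), ("grad", 18), ("grad", 22)]

weighted_sum = sum(weights[stratum] * hours for stratum, hours in respondents)
total_weight = sum(weights[stratum] for stratum, _ in respondents)
print(f"weighted population estimate: {weighted_sum / total_weight:.1f} hours")
```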
  65. what to do if you can't create a random sample?
    • consider a non-probability design
    • design it to be as representative as possible
    • try to identify potential bias
  66. voluntary samples
    • volunteers are self-selected on the basis of interest in the subject
    • the most problematic type of sample in terms of bias
    • researcher has almost no control over how the sample is generated
  67. convenience sample
    • non-probability sample
    • subjects are conveniently available to the researcher (ex. psych undergrads required research participation)
    • not the best way to construct a representative sample
    • external validity problems
  68. quota/judgement/purposive samples
    • non-probability
    • used to find members of a highly specific population or build a reasonably representative sample of a larger population
    • ex. going out and looking for people to interview in certain locations
  69. snowball samples
    • non-probability
    • sample members provide names of additional potential sample members
    • try to increase representativeness by starting several "snowballs"
    • researcher has some control over sample composition
  70. criteria for conducting effective surveys
    • obtaining a representative sample
    • obtaining reasonably high response rate
    • obtaining valid and reliable responses to questions
    • efficient (time and money) data collection
  71. response sets
    • sample members respond to a set of questions in a similar way, so responses are being motivated by something other than the questions being asked
    • boredom
    • social desirability
    • acquiescent
  72. acquiescent
    trying to be agreeable by answering the same way (ex. agree, agree, agree)
  73. reasonable goal (in today's age) for response rates?
    over 50%
  74. Don Dillman
    • the "tailored design method"
    • an extension of social exchange theory
    • explains why individuals are motivated to engage in certain social behaviors and not others
    • need to instill trust in participants, tell them how important their participation is, make questions interesting/engaging
  75. guidelines for constructing survey questions
    • establish researcher-participant relationship to increase participant engagement
    • don't make participants do your work
    • obtaining reliable and valid information
  76. keys to professional-looking questionnaires?
    • covers
    • nice paper
    • large enough font
    • lots of white space
    • return-address envelope
  77. what makes an effective cover letter?
    • establish legitimacy
    • explain requirements of respondents (survey content, time, additional expectations)
    • address ethical issues
    • concise, professional
  78. anonymity or confidentiality?
    • anonymity: you don't know who the answers are coming from
    • confidentiality: you won't tell anyone the info given by respondents
  79. response effects
    • factors that can lead to systematic measurement error
    • ex. task effects, interviewer effects, respondent effects
  80. halo effects
    • bias due to missing response options
    • bias due to unbalanced response scales
    • bias due to provision of selective information (respondents aren't given all the information they need)
  81. pros of closed questions
    • easy to process
    • enhance comparability of answers
    • some provide question clarification for respondents
    • can be answered quickly and easily
    • reduce risk of interviewer/transcriber bias
  82. cons of closed questions
    • answers may lack spontaneity and authenticity
    • must make sure answer categories don't overlap
    • can be difficult to make exhaustive
    • respondents may differ in interpretations of forced-choice answers
  83. pros of open-ended questions
    • respondents can answer in their own terms
    • allow unusual, maybe unexpected responses
    • since the questions don't suggest answers, you can tap into a participant's knowledge and ideas about the issue 
    • responses can later be used to generate fixed-choice answer categories
  84. cons of open-ended questions
    • time consuming to record
    • answers have to be coded
    • prospective respondents may be put off by having to write out a full answer
    • may be inaccuracies in writing down exactly what respondents are saying
  85. solutions for a high rate of non-response with open-ended questions?
    • better interviewers
    • sharpened questions
    • not too many open-ended Qs
  86. what should you keep in mind when designing the questionnaire, observation schedule, and coding frame?
    • the data analysis
    • stats techniques used depend on how a variable is measured
    • size and nature of a sample can limit the suitability of certain kinds of stats
  87. levels of measurement
    • nominal
    • ordinal
    • interval-ratio
  88. nominal variable
    • categorical
    • composed of categories that have no relationship to each other except that they are different
  89. ordinal variable
    • values can be ranked
    • ex. Likert-style questions
  90. interval-ratio variable
    • based on a unit of measurement
    • takes the form of actual numbers
  91. measures of central tendency for univariate analysis?
    • mean
    • median
    • mode
  92. measures of dispersion for univariate analysis?
    • range
    • standard deviation
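A quick illustration of these univariate statistics using Python's statistics module on a made-up interval-ratio variable:

```python
from statistics import mean, median, mode, stdev

# Hypothetical interval-ratio variable: respondents' ages.
ages = [19, 21, 21, 23, 25, 28, 34, 47]

print("mean:", mean(ages))               # central tendency
print("median:", median(ages))
print("mode:", mode(ages))
print("range:", max(ages) - min(ages))   # dispersion
print("standard deviation:", round(stdev(ages), 1))
```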
  93. frequency table
    • for univariate analysis
    • provides the numbers and percentages of cases belonging to each of the categories of the variables in question
    • can be created for all 3 variable types
  94. contingency table
    • for bivariate analysis
    • like a frequency table, but allows two variables to be analyzed simultaneously to examine relationships between them
  95. for cross-tabs, where should you total up the percentages?
    calculate percentages so that they total 100% within each category of the independent variable (see the sketch below)
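A minimal sketch of percentaging a contingency table in the direction of the independent variable (the employment/satisfaction data are made up):

```python
from collections import Counter

# Hypothetical cases: (independent variable = employment status, dependent = satisfied?).
cases = [("employed", "yes"), ("employed", "yes"), ("employed", "no"),
         ("unemployed", "yes"), ("unemployed", "no"), ("unemployed", "no")]

counts = Counter(cases)
for iv in ("employed", "unemployed"):
    column_total = sum(n for (group, _), n in counts.items() if group == iv)
    for dv in ("yes", "no"):
        # percentages total 100% within each category of the independent variable
        pct = 100 * counts[(iv, dv)] / column_total
        print(f"{iv:>10} / satisfied={dv}: {pct:.0f}%")
```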
  96. bigger the sample, the _______ the confidence interval
    • smaller
    • significance is easier to obtain with bigger samples
  97. interaction effect
    ex. in multivariate analysis, when a statistical relationship exists for some groups but not for others
  98. when determining significance in an experiment:
    • set up a null hypothesis
    • establish an acceptable level of statistical significance
    • determine the statistical significance of the findings
    • decide whether to reject or not reject the null hypothesis
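A hedged sketch mapping these steps onto a chi-square test of independence for a contingency table (made-up counts; assumes the scipy library is available):

```python
from scipy.stats import chi2_contingency

# Hypothetical contingency table: rows = employment status, columns = satisfied yes/no.
observed = [[40, 20],
            [25, 35]]

# Step 1: null hypothesis = no relationship between the two variables.
alpha = 0.05                                                 # step 2: acceptable significance level
chi2, p_value, dof, expected = chi2_contingency(observed)    # step 3: test statistic and p-value

# Step 4: reject the null only if p falls below the chosen level.
if p_value < alpha:
    print(f"p = {p_value:.3f}: reject the null hypothesis")
else:
    print(f"p = {p_value:.3f}: do not reject the null hypothesis")
```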
  99. goals of a structured interview
    • ensure interviewees receive the same form of questioning and the same interview stimulus
    • allow interviewees' answers to be aggregated to form group rates
    • means that the variation in answers is due to "true/real" variation in the characteristic being measured rather than extraneous factors
  100. intra-interviewer variability
    an interviewer isn't consistent with the way they ask questions or record answers
  101. inter-interviewer variability
    when there is more than one interviewer who may not be consistent with one another on how they ask Q's or record A's
  102. coding frame
    • rules for assigning answers to categories
    • variation can occur in the way things are categorized because of how the interview schedule was administered, or the way answers were recorded
  103. questionnaire
    • structured interview without an interviewer present
    • make sure they are easy to follow
    • fewer open-ended questions
    • shorter to reduce the risk of "respondent fatigue"
  104. reliability of a respondent throughout taking a survey is about ________
    internal consistency
  105. how to check internal consistency when constructing an index?
    • Cronbach's alpha
    • 1= perfect internal reliability
    • 0= no internal reliability
    • 0.80 usually used as an acceptable standard
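A small sketch of computing Cronbach's alpha from made-up Likert item scores, using the standard formula alpha = (k / (k - 1)) * (1 - sum of item variances / variance of the total score):

```python
from statistics import variance

# Hypothetical item scores: each row is one respondent, each column one Likert item.
responses = [
    [4, 5, 4, 4],
    [2, 2, 3, 2],
    [5, 4, 5, 5],
    [3, 3, 2, 3],
    [4, 4, 4, 5],
]

k = len(responses[0])                                   # number of items in the index
item_vars = [variance(col) for col in zip(*responses)]  # variance of each item
total_var = variance([sum(row) for row in responses])   # variance of the summed scale

alpha = (k / (k - 1)) * (1 - sum(item_vars) / total_var)
print(f"Cronbach's alpha = {alpha:.2f}")   # values of about 0.80+ are usually acceptable
```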
  106. nominal definition
    describes in words what the concept means, ex. a dictionary defn
  107. operational definition
    • spells out the operations the researcher will perform to measure the concept
    • ex. how to measure the incidence of crime?