1. units of analysis
    We try to show the distinctive characteristics of people who engage in these behaviors by taking individuals as the units of analysis, that is, the source of the data from which we are able to make generalizations.
  2. areal groupings
    As we will see, much investigation is also concerned with geographical, or areal, groupings of people.
  3. aggregate data
    Whenever we combine information about the behaviors, attitudes, or other attributes of individuals to represent statistically some social unit comprising those people, we are using aggregate data.
  4. levels of aggregation
    These social units will vary in size and comprehensiveness. Researchers might, for example, study the same phenomenon at progressively higher levels of aggregation.
  5. census block
    A territorial grouping, or geographical unit, that may include fewer than a hundred households is called a census block.
  6. census tract
    Although bigger than the census block, the census tract is still a relatively small area, generally containing a population of between 1,500 and 8,000 persons.
  7. cross-cultural comparisons
    Moreover, nearly all nations maintain census records, and this makes it possible to engage in cross-cultural comparisons from one country to another.
  8. errors of coverage
    There may be errors of coverage. Inevitably, counting mistakes will be made in the original collection of data from individuals.
  9. classification errors
    There will be unavoidable classification errors. Census data are collected every ten years by questionnaire. As would be the case in evaluating the quality of any data collected via questionnaire, we know that respondents will lie about certain issues.
  10. reported rate & true rate
    Because of a variety of errors in the compilation of any data, there will always be a discrepancy between the reported rate of some phenomenon and the true (or actual) rate.
  11. Social indicators
    Social indicators are used to measure change in such conditions as poverty, public safety, education, health, and housing.
  12. time series data
    Social indicators are generally presented as time series data; this makes it possible to chart changes over time.
  13. disaggregation
    It is possible to rework the available data so that we can see differences between areas of the country in life expectancy rates. This process is called disaggregation – taking an existing unit of data and breaking it into finer or less comprehensive units.
  14. demographers
    demographers (those who study population trends)
  15. demographic transition
    demographic transition. After remaining relatively stable because of simultaneously high birth and death rates, national populations begin a rapid increase as death rates decline (because of improvements in health and standard of living) and birth rates remain stable.
  16. ecological inference & ecological fallacy
    ecological inference is the name given to efforts to infer individual behavior from aggregate data. Such inference can be problematic, so we must be wary about employing data collected from and about groups in order to make inferences to individuals. The ecological fallacy involves making such an illegitimate shift of inference.
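The ecological fallacy can be made concrete with a toy dataset. The figures below are invented purely for illustration (echoing the classic immigrant-literacy example): the district with the larger immigrant share has lower overall literacy, inviting the aggregate-level inference that immigrants are less literate, yet within each district immigrants are actually more literate than natives.

```python
# Invented counts: for each district, (population, number literate) by group.
districts = {
    "A": {"native": (90, 81), "immigrant": (10, 10)},
    "B": {"native": (40, 20), "immigrant": (60, 36)},
}

def rate(pop_lit):
    pop, lit = pop_lit
    return lit / pop

for name, d in districts.items():
    total_pop = d["native"][0] + d["immigrant"][0]
    total_lit = d["native"][1] + d["immigrant"][1]
    share = d["immigrant"][0] / total_pop
    print(f"District {name}: immigrant share {share:.0%}, "
          f"literacy {total_lit / total_pop:.0%}")

# Aggregate pattern: the district with MORE immigrants has LOWER literacy.
# Individual-level pattern: within EACH district, immigrants are MORE
# literate than natives -- the aggregate inference would be wrong.
for name, d in districts.items():
    assert rate(d["immigrant"]) > rate(d["native"])
```

Inferring the individual-level relationship from the district-level rates alone would commit exactly the illegitimate shift of inference described above.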
  17. atomistic fallacy
    The ecological fallacy can be committed in reverse – to make incorrect statements about groups on the basis of data from individuals. When we try to test hypotheses about groups with only individual-level data, we risk committing what has been called the atomistic fallacy.
  18. comparative sociology
    Typically, however, academic tradition assigns the term“comparative research,” or comparative sociology, to studies that include two or more nations or cultures.
  19. evolutionary theory
    The proponentsof evolutionary theory viewed society as passing through a series of stages.
  20. macrolevel
    Another strength of the comparative method is that it allows us to test theories that specify as variables macrolevel structures or behaviors – that is, characteristics of entire societies.
  21. ethnography
    Typically, an anthropologist goes to live with a people in a distant land for an extended time and then writes an ethnography based on this fieldwork. The ethnography describes the society’s organizations, kinship system, language, religious beliefs, and so forth.
  22. area studies
    the social and political research carried out in a single foreign country (often referred to as area studies).
  23. Galton’s problem
    This diffusion of culture makes it more reasonable to talk about criteria for measuring the degree of independence between societies, rather than to look for criteria to measure complete independence. This problem of lack of independence between units is referred to as Galton’s problem.
  24. data banks
    the Roper Public Opinion Center has stored data from several thousand survey research studies conducted in almost seventy countries. Such repositories (referred to as data banks) can be very useful to those interested in the secondary analysis of survey research data to undertake comparative research.
  25. conceptual equivalence
    conceptual equivalence is central to all cross-societal research; the concepts used must be similarly meaningful in all the cultures being compared.
  26. measurement equivalence
    The problem of measurement equivalence, that is, of operationalizing theoretical concepts in such a way that the resulting measures are comparable across all societies being considered, must be confronted by all comparative researchers.
  27. back translation
    One strategy that has been developed for coping with this problem is back translation. First, we have one bilingual person translate the questionnaire from English into the language of the society we are considering. Then, we have a second bilingual person, who has no knowledge of the original English version, translate the questionnaire back into English.
  28. courtesy bias
    A final factor that can affect the interview situation is courtesy bias. This phenomenon occurs when respondents provide information that they feel will please the interviewer or that they feel is befitting to people of their status.
  29. applied social research
    it is an example of applied social research, a problem-solving effort that has taken social investigation out of the ivory tower of academic endeavor and into real-world settings.
  30. accountability
    the results of evaluation research are used as measures to improve accountability. If practitioners can cite data from evaluation research to demonstrate the effectiveness of their programs, then the initiatives may not be cut or denied support.
  31. basic research
    the distinction between evaluation research and basic research. Whereas the latter typically focuses on gathering general information, testing hypotheses, or adding to knowledge in some systematic way, evaluation research focuses on the immediate, practical use of knowledge.
  32. formative & summative
    The evaluator’s avowed purpose may be formative (trying to improve the program) or summative (rendering a judgment regarding the program’s mission and/or effectiveness).
  33. needs assessment
    To answer these questions a needs assessment – a comprehensive evaluation of the demand for some new program or service – is performed. Needs assessments typically rely on existing data sources, surveys, and in-depth interviews.
  34. process evaluation
    These questions are answered via process evaluation, which investigates the actual implementation of a program, including possible alternative delivery procedures.
  35. impact evaluation
    impact evaluation is typically more detailed and covers a longer time period than just measuring program effectiveness. It looks at both intended and unintended consequences of the whole program.
  36. cost-benefit analysis
    Other types of summative evaluation, cost-effectiveness and cost-benefit analysis, address questions of efficiency by standardizing outcomes in terms of their dollar costs and values.
  37. in-house research
    An increasing number of evaluation research exercises are being conducted not by outside “experts” but by regular employees of organizations hired to perform in-house research.
  38. facilitator
    Two other innovative approaches to evaluation, action research and outcome mapping . . . Using these techniques, the evaluator acts primarily as a facilitator, that is, a person who helps members of the organization to maximize the value of the evaluation for themselves.
  39. theory failure
    Moreover, it is possible for a program to produce immediate positive results but ultimately fail because the theory underlying the formulation of the problem was faulty. An example of such theory failure might be a training program that succeeds in producing competent building tradespeople but does not result in the participants’ finding employment, as had been expected.
  40. latent consequences
    thought should be given to the unintended effects, as well as to the stated objectives, of any program. Strategies for change carry the potential for latent consequences, which may or may not work against the original rationale for intervention.
  41. stakeholders
    One useful way to construct this plan is to make a list of relevant stakeholders – various categories of individuals and groups who make up the organization’s client base and its collaborative partners and suppliers in the community.
  42. cheerleaders
    Care must be taken to ensure that the interviewees are not merely cheerleaders – reliable acquaintances of the sponsors of the study.
  43. focus group
    A focus group is a small (six- to ten-person) collection of individuals brought together for a one- to two-hour discussion of some issue, idea, product, or program.
  44. one shot study
    One technique is to study a group of people from the target population after it has been exposed to a program that has caused some change. This approach is called the one shot study.
  45. base line measurement
    there is no base line measurement – no assessment of the knowledge, attitudes, and behavior of the businessmen toward women before their exposure to the seminars.
  46. before and after
    An alternative technique is to study a group of people both before and after exposure to a particular program.
  47. controlled experiment
    The controlled experiment provides such a check on unaccounted-for variables and is therefore considered to be an ideal research approach for evaluation research.
  48. randomization
    The usual way to do this is through the random assignment of half of the selected people to one group and half to the other group (the technical term for this process is randomization).
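Random assignment can be sketched in a few lines. This is a minimal illustration, not a prescribed procedure; the function name and the pool of 100 participants are invented for the example.

```python
import random

def randomize(participants, seed=None):
    """Shuffle the pool of selected people, then split it in half:
    one half becomes the treatment group, the other the control group."""
    pool = list(participants)
    random.Random(seed).shuffle(pool)  # shuffling removes any ordering bias
    half = len(pool) // 2
    return pool[:half], pool[half:]

# Assign 100 hypothetical participants (a fixed seed makes the split repeatable).
treatment, control = randomize(range(100), seed=42)
assert len(treatment) == len(control) == 50
assert set(treatment) | set(control) == set(range(100))  # everyone is assigned
```

Because chance alone decides who lands in which group, unaccounted-for variables should, on average, be distributed equally across the two groups.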
  49. goal-based evaluation
    an instance where the administrator may be at odds with the researcher, who is attempting to carry out a goal-based evaluation. The social scientist typically wants to express the objectives of an organization in terms specific enough to permit the organization’s behavior to be measured; the administrator may simply want the program or agency to remain in operation.
  50. goal-free evaluation
    An alternative strategy that reduces the potential for distrust between a researcher and program personnel is the so-called goal-free evaluation. Proponents of this model believe that information should be gathered that reflects an array of actual program accomplishments in response to general social needs, and that data collection must not be confined solely to the more narrow and specific list of goals that may appear in a program’s official statement of purpose.
  51. shifting program
    It is common to find a shifting program, one that is not executed in a perfectly predictable manner.
  52. action research
    One way of dealing with the fears and suspicions of program personnel is to design an action research evaluation in which the focus is on providing feedback as the program progresses. Here the idea is to offer suggestions for improvement along the way, rather than making one final pro or con judgment.
  53. outcome mapping
    Another genre of participatory evaluation is called outcome mapping. With the aid of facilitators, program staff and project participants focus on changes in their own behavior, relationships, and activities when planning strategies, monitoring performance, or documenting outcomes.
  54. boundary partners
    The outcomes of a program are expressed in terms of changes in the behaviors of its boundary partners – “individuals, groups, and organizations with whom the program interacts directly and with whom the program anticipates opportunities for influence”.
  55. 1. eyewash:
    1. eyewash: an attempt to justify a weak or bad program by deliberately selecting only those aspects that “look good.” The objective of the evaluation is limited to those parts of the program that appear successful.
  56. 2. whitewash:
    2. whitewash: an attempt to cover up program failure or errors by avoiding any objective appraisal. A favorite device here is to solicit “testimonials” that divert attention from the failure.
  57. 3. posture:
    3. posture: an attempt to use evaluation as a“gesture” of objectivity and to assume the pose of “scientific” research. This “looks good” to the public and is a sign of “professional” status.
  58. 4. postponement:
    4. postponement: an attempt to delay needed action by pretending to seek the “facts.” Evaluative research takes time and, hopefully, the storm will blow over by the time the study is completed.
  59. multiple indicators
    An important way to increase both the reliability and the validity of abstract constructs – such as “happiness,” “alienation,” “tolerance,” and “anxiety” – is to operationalize them by using multiple indicators of the same phenomenon. A number of survey questions may be combined to assess the strength of a particular variable, the degree to which it is present, or its intensity.
  60. composite measure
    Indexes and scales are devices for creating a single composite measure of behavior and attitudes out of a number of related indicators.
  61. index score
    an index score is obtained by assigning numbers to the answers given, in relation to the presence or absence of the variable under investigation.
  62. data reduction
    The index score makes comparison easier, but it is also useful for purposes of data reduction; that is, it expresses a wide range of data in abbreviated, numerical form.
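A simple sketch of such an index, under an assumed scoring scheme: each of several related "yes/no" survey items (items invented here for illustration) contributes one point when the variable is present, and the sum is the respondent's index score – many answers reduced to one number.

```python
# Hypothetical indicator items for a "well-being" index (names invented).
ITEMS = ["feels optimistic", "enjoys daily activities",
         "sleeps well", "has supportive friends"]

def index_score(answers):
    """answers maps each item to True ('yes', variable present) or False ('no').
    The score is simply the count of 'yes' answers: 0 (lowest) to 4 (highest)."""
    return sum(1 for item in ITEMS if answers.get(item, False))

respondent = {"feels optimistic": True, "enjoys daily activities": True,
              "sleeps well": False, "has supportive friends": True}
print(index_score(respondent))  # prints 3 -- one composite number for 4 answers
```

Comparing two respondents then means comparing two numbers rather than two full sets of answers, which is the data-reduction benefit described above.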
  63. base period
    The point in time to which today’s prices are compared is called the base period.
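The arithmetic behind a base-period comparison is a simple ratio: the base period is set to 100, and later prices are expressed as a percentage of the base-period price. The prices below are invented for illustration.

```python
# Hypothetical prices of the same market basket of goods.
base_price = 2.00     # price in the base period (index value defined as 100)
current_price = 2.50  # price today

index = 100 * current_price / base_price
print(index)  # prints 125.0 -> prices are 25% higher than in the base period
```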
  64. domains
    we might develop indicators from the following three general domains, or components, of authoritarianism: physical, moral, and political.
  65. face validity
    These indicators, which could be operationalized through intensive interviews, questionnaires, or psychological tests of personality, have face validity. They are logically related to the overall concept being measured.
Card Set
Bolded words in book; CH 14-17