Marketing research design

  1. What is measurement?
    Measurement is the process of estimating or determining the magnitude of a quantity.
  2. What is a construct? How is that related to a dimension/facet? Be able to give an example of a construct and a facet!
    A construct is a latent variable: a variable that is not directly observed but is inferred (through a mathematical model) from other variables that are directly observed and measured. Constructs can be simple (uni-dimensional) or complex (multi-dimensional); a facet is one side/dimension of a complex construct. Example: intelligence is a construct, and verbal ability is one of its facets.
  3. What is the difference between a conceptual and an operational definition?
    A conceptual definition describes what the construct is; an operational definition specifies how it will be measured (e.g., through instrumental and autotelic items, defined below).
  4. instrumental
    Touching in order to evaluate a product.
  5. autotelic
    Touching for pure enjoyment.
  6. What is the difference between a closed and open ended question? What are the advantages/disadvantages of each!
    Open-ended questions allow the respondent to elaborate on or explain an answer (richer detail, but harder to code and compare); closed-ended questions offer the respondent a fixed choice of response categories (quick to answer and easy to pre-code, but they may miss answers that were not anticipated).
  7. What terms are used to define closed items with 2 response categories versus those with more than 2 response categories? (HINT: di….. and ….)?
    Dichotomous = 2 response categories; multichotomous = more than 2 response categories.
  8. Nominal scale
    Non-numeric categories. Response options must be collectively exhaustive (all possible answers available) and mutually exclusive. Items may be dichotomous (2 categories) or multichotomous (more than 2), and the categories are pre-coded with numbers.
  9. ordinal scale?
    Answers are given as ranks: the categories are ordered, but the distances between them are not defined.
  10. Interval(scaled)
    Scale points are equally spaced, with no absolute zero. Items present a “specified statement” or “opinion statement” with an odd number of scale points; the semantic differential is an interval scale that maps a respondent's semantic space (mental space) between bipolar adjectives.
  11. Ratio (scale)
    Has an absolute zero point ("0" means none of the quantity is present), so ratios between values are meaningful.
  12. Reliability
    consistent

    The quality of a measurement indicating the degree to which the measure is consistent, that is, repeated measurements would give the same result (See validity).
  13. Validity
    accurate

    A quality of a measurement indicating the degree to which the measure reflects the underlying construct, that is, whether it measures what it purports to measure.
  14. What is inter-rater reliability?
    inter-rater reliability, inter-rater agreement, or concordance is the degree of agreement among raters. It gives a score of how much homogeneity, or consensus, there is in the ratings given by judges. It is useful in refining the tools given to human judges, for example by determining if a particular scale is appropriate for measuring a particular variable. If various raters do not agree, either the scale is defective or the raters need to be re-trained.
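    A minimal Python sketch (the ratings are made up, not from the course) of simple percent agreement between two judges; a chance-corrected statistic such as Cohen's kappa would refine this:

        # Hypothetical ratings from two judges on a 1-5 scale
        rater_a = [3, 4, 4, 2, 5, 3, 4]
        rater_b = [3, 4, 3, 2, 5, 3, 4]

        agreed = sum(a == b for a, b in zip(rater_a, rater_b))
        print(f"percent agreement: {agreed / len(rater_a):.0%}")  # 86%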
  15. What is the difference between test-retest and inter-item reliability?
    Test-retest reliability is desirable in measures of constructs that are not expected to change over time. For example, if you use a certain method to measure an adult's height, and then do the same again two years later, you would expect a very high correlation; if the results differed by a great deal, you would suspect that the measure was inaccurate. The same is true for personality traits such as extraversion, which are believed to change only very slowly. In contrast, if you were trying to measure mood, you would expect only moderate test-retest reliability, since people's moods are expected to change from day to day. Very high test-retest reliability would be bad, since it would suggest that you were not picking up on these changes.

    Test-retest reliability is a statistical method used to determine a test's reliability: the test is performed twice; in the case of a questionnaire, this means giving a group of participants the same questionnaire on two different occasions. If the correlation between the separate administrations is high (~.7 or higher), the test has good test-retest reliability.

    Inter-item reliability: the average inter-item correlation uses all of the items on the instrument that are designed to measure the same construct. We first compute the correlation between each pair of items; for example, with six items there are 15 different item pairings (i.e., 15 correlations), and their average is the inter-item reliability.
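    A minimal Python sketch of the average inter-item correlation for six items (the response data are hypothetical):

        import numpy as np

        # Hypothetical responses: 5 respondents x 6 items measuring one construct
        items = np.array([[5, 4, 5, 4, 5, 4],
                          [2, 1, 2, 2, 1, 2],
                          [4, 4, 3, 4, 4, 3],
                          [3, 3, 3, 2, 3, 3],
                          [5, 5, 4, 5, 4, 5]])

        r = np.corrcoef(items, rowvar=False)      # 6 x 6 correlation matrix
        pairs = r[np.triu_indices_from(r, k=1)]   # the 15 unique item pairings
        print(f"average inter-item correlation: {pairs.mean():.2f}")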
  16. What is the difference between face and content validity? Who establishes face validity?
    • Face validity pertains to whether the test "looks valid" to the examinees who take it, the administrative personnel who decide on its use, and other technically untrained observers.
    • Content validity requires more rigorous statistical tests than face validity, which only requires an intuitive judgement. Content validity is most often addressed in academic and vocational testing, where test items need to reflect the knowledge actually required for a given topic area (e.g., history) or job skill (e.g., accounting). In clinical settings, content validity refers to the correspondence between test items and the symptom content of a syndrome.
  17. What does predictive validity mean? How is it used?
    • Predictive validity is the extent to which a score on a scale or test predicts scores on some criterion measure.
    • For example, the validity of a cognitive test for job performance is the correlation between test scores and, for example, supervisor performance ratings. Such a cognitive test would have predictive validity if the observed correlation were statistically significant.
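    A minimal sketch of that check in Python (the scores and ratings are invented for illustration):

        from scipy.stats import pearsonr

        test_scores = [62, 75, 48, 88, 70, 55, 91, 67]           # cognitive test
        ratings     = [3.1, 3.8, 2.5, 4.4, 3.6, 2.9, 4.6, 3.3]   # later supervisor ratings

        r, p = pearsonr(test_scores, ratings)
        print(f"r = {r:.2f}, p = {p:.3f}")  # a significant r is evidence of predictive validity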
  18. What is the difference between convergent and discriminant validity?
    • Convergent validity is the degree to which an operation is similar to (converges on) other operations that it theoretically should be similar to. For instance, to show the convergent validity of a test of mathematics skills, the scores on the test can be correlated with scores on other tests that are also designed to measure basic mathematics ability. High correlations between the test scores would be evidence of convergent validity.
    • Discriminant validity describes the degree to which the operationalization is not similar to (diverges from) other operationalizations that it theoretically should not be similar to.
  19. What is the difference between a sample and a population?
    A population is the entire group the researcher wants to draw conclusions about; a sample is the subset of the population that is actually measured.
  20. What is the difference between non-probability and probability sampling?
    • Probability sampling (every member of the population has a known, nonzero chance of selection):
    • Simple random: a subset of individuals (a sample) chosen from a larger set (a population), where each individual is chosen entirely by chance and has the same probability of being chosen at any stage during the sampling process.
    • Systematic: select every I-th unit using the interval I = N/n (e.g., I = 12000/120 = 100); see the sketch below.
    • Stratified: divide the population into strata and sample within each stratum; minimizes bias.
    • Cluster: group people together (e.g., by location) and sample whole clusters.
    • Non-probability sampling (selection probabilities unknown):
    • Convenience: based on convenience of location or access.
    • Judgement: relies on the researcher's bias, choosing whoever they think would best fit the survey.
    • Snowball: used when subjects are difficult to find; referrals build a larger sample.
    • Quota: set quotas on characteristics such as race or gender so that the sample is more representative.
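    A minimal Python sketch of systematic sampling using the interval from the example above (N = 12000, n = 120, so I = 100):

        import random

        N, n = 12000, 120
        I = N // n                          # interval I = N/n = 100

        start = random.randrange(I)         # random start within the first interval
        sample = list(range(start, N, I))   # then every I-th unit
        print(len(sample))                  # 120 selected positions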
  21. What is the advantage of cluster and stratified sampling over simple random?
    • Stratified: ensures every stratum is represented, so it minimizes bias and sampling variance relative to a simple random sample of the same size.
    • Cluster: grouping people together makes data collection cheaper and more practical when the population is spread out. By contrast, simple random sampling, in which each individual is chosen entirely by chance with equal probability at every stage, can be costly and may by chance under-represent small subgroups.
  22. What is the major disadvantage of using a non-probability technique?
    In non-probability samples the relationship between the target population and the survey sample is immeasurable and potential bias is unknowable. Sophisticated users of non-probability survey samples tend to view the survey as an experimental condition, rather than a tool for population measurement, and examine the results for internally consistent relationships.
  23. What is the difference between sampling and non-sampling error?
    • Non-sampling error is a catch-all term for the deviations from the true value that are not a function of the sample chosen, including various systematic errors and any random errors that are not due to sampling. Non-sampling errors are much harder to quantify than sampling errors.
    • Sampling error is random error caused by observing a sample instead of the whole population (in contrast to systematic error).
  24. Be familiar with the different non-sampling errors, such as measurement, processing, etc.
    • Non-sampling errors include:
    • Measurement error: the value recorded differs from the true value.
    • Processing error: data are input or entered incorrectly.
    • Non-response bias: respondents differ from non-respondents; it is often checked by assuming people who respond late are similar to people who do not respond at all.
  25. What does the factorial design tell us? How do we use it to determine sample size?
    A full factorial experiment is an experiment whose design consists of two or more factors, each with discrete possible values or "levels", and whose experimental units take on all possible combinations of these levels across all such factors. A full factorial design may also be called a fully crossed design. Such an experiment allows studying the effect of each factor on the response variable, as well as the effects of interactions between factors. For sample size, multiply the number of levels of each factor to get the number of cells, then multiply by the subjects needed per cell (see #26).
  26. If given a factorial design of (2 X 3 X 3), be able to explain what it means and how many subjects you need based on the computation formula discussed in class.
    • 2 X 3 X 3 = three variables (factors), with 2, 3, and 3 levels respectively.
    • 2 x 3 x 3 = 18 different groups (cells) of people.
    • x 30 = need 30 people in each of the 18 groups (30 comes from the central limit theorem).
    • 18 x 30 = 540 subjects in total (computation sketched below).
  27. What is a screening/filter question used for? Why is it so important?
    To screen out respondents who do not belong to the target population, so that neither the survey takers nor the researchers waste their time on answers that cannot be used.
  28. What does the process of pre-coding nominal questions entail?
    Assigning numbers to the response categories on the questionnaire in advance, so that recording the information later is faster and easier.
  29. What is a reverse coded item?
    An item worded in the direction opposite to the other items; its scores must be flipped before analysis so that high values mean the same thing on every item.
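    A minimal sketch of flipping a reverse-coded item on a 1-to-5 scale (the responses are made up):

        scale_min, scale_max = 1, 5
        responses = [1, 2, 4, 5, 3]     # raw answers to the reverse-worded item

        flipped = [scale_min + scale_max - r for r in responses]
        print(flipped)                  # [5, 4, 2, 1, 3]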
  30. validity has 2 parts
    internal, external
  31. internal includes
    • face
    • content
    • convergent
    • discriminant
    • predictive
  32. external includes
    • sample, sampling frame, and the population the sample represents
    • sampling error
  33. reliability symbol
    Cronbach's alpha (coefficient alpha), written α.
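    A minimal Python sketch (made-up data) of coefficient alpha, using the standard formula alpha = k/(k-1) * (1 - sum of item variances / variance of the total score):

        import numpy as np

        def cronbach_alpha(items):
            # rows = respondents, columns = items measuring one construct
            items = np.asarray(items, dtype=float)
            k = items.shape[1]
            item_vars = items.var(axis=0, ddof=1).sum()   # sum of item variances
            total_var = items.sum(axis=1).var(ddof=1)     # variance of total scores
            return k / (k - 1) * (1 - item_vars / total_var)

        # Hypothetical 5 respondents x 4 items on a 1-5 scale
        data = [[4, 5, 4, 5],
                [2, 2, 3, 2],
                [3, 4, 4, 3],
                [5, 5, 5, 4],
                [1, 2, 1, 2]]
        print(f"alpha = {cronbach_alpha(data):.2f}")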
  34. survey administration?
    • goal: increase data quality and response rates
    • 1) filter questions
    • 2) necessary questions only
    • 3) avoid vague and misleading questions
    • 4) order of questions
    • 5) control for bias/fatigue
    • 6) pre-test