-
Ethical issues in social research
- Voluntary participation and informed consent
- No harm to the participants
- Anonymity & confidentiality
- Analysis/reporting issues
- Right to receive services vs. responsibility to evaluate efficacy
-
Explain voluntary participation and informed consent
- people are fully informed of consequences of participation and agree to take part
- ---Research involves intrusion into others' lives: researchers need to be empathetic toward the person and think about risks in advance so resources can be available
- Cannot force participation
- ----Example: if researchers just say "we're going to ask you questions & come back in 6 months to ask more," the person would want to know: what questions? is it confidential? etc.
- Need to be sensitive to implied sanctions
- Problem and scientific concern: generalizability of results
- Deception (need to conceal the nature of the research from those observed) -- must still be ethical
-
Explain no harm to participants
- Never injure subjects: injury could include embarrassment, revealing sensitive info, or disclosing identity
- Safeguard: human subjects review (institutional review board)
-
Explain anonymity and confidentiality (protecting identity)
- Anonymity: researcher can't identify a response with a given subject -- a problem with survey follow-ups (bc surveys are usually anonymous)
- Confidentiality: researcher can identify a response with a given subject but promises not to do so publicly, through use of coding by # and limiting the # of persons who can make the connection
-
Explain analysis/reporting issues
- Make short comings known to readers
- Report negative as well as positive findings
- Ex post facto rearranging of data is an ethical problem
-
Explain right to receive services vs. responsibility to evaluate efficacy
- Two values in conflict: 1) to help people in need, and 2) to scientifically test positive/negative impact of services
- Are other services available? How serious are the problems being dealt with? Using waiting lists
-
Ethical issues regarding gender & cultural bias/insensitivity
Need to be sensitive to potential issues of bias
-
Guidelines to avoiding ethical issues with gender & cultural bias
- Immerse yourself in the culture before designing research methods
- Use minority scholars & community members
- Use culturally sensitive language (pretest the questionnaire)
- Use bilingual interviewers where necessary
- Avoid unwarranted focus on deficits of minority group
-
What is internal validity?
Refers to the confidence we have that the results of a study accurately depict whether one variable is or is not a cause of another -- to the extent that the three criteria for inferring causality are met, a study has internal validity
-
What is external validity?
The extent to which we can generalize the findings of a study to settings and populations beyond the study conditions
-
The requirements for inferring causality
- 1) The cause precedes the effect in time
- ---e.g. family member with schizophrenia and lower socioeconomic status: check what the family's SES was at certain times, i.e. before the schizophrenia dx, to establish time order and sequence
- 2) The 2 variables must be empirically correlated with one another (must be a demonstrable relationship)
- 3) The observed empirical correlation between the variables can't be explained away as being due to the influence of a third variable that causes both (non-spuriousness)
- ---e.g. knee aches before it rains (both caused by change in relative humidity)
-
9 threats to internal validity
1) History: during the course of the research, extraneous events may occur that will confound the results -- something that coincides in time with the independent variable
-
9 threats to internal validity
2) maturation/passage of time: people are continually growing and changing, whether part of the research or not, and those changes could affect the results of the research. e.g. people going through crisis improve w/or w/o treatment, by the passage of time
-
9 threats to internal validity
3) testing: occurs when the process of testing itself enhances the performance on a later version of the test without any corresponding improvement on the construct the test is trying to measure
-
9 threats to internal validity
4) instrumentation: if we use different measures of the dependent variable, the tests may not be comparable and we may not be measuring the same thing each time
-
9 threats to internal validity
5) statistical regression: with repeated testing on almost any assessment, scores will fluctuate from one time to the next. This is especially so if we choose persons on the basis of extreme scores -- either very high or very low -- because subsequent scores will regress (move back) toward the mean of the distribution of scores. We could show "improvement" based simply on this regression and not on the effect of any intervention
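The regression effect can be demonstrated with a short simulation (a hypothetical sketch: the score distribution, noise level, and selection cutoff are all invented for illustration):

```python
import random

random.seed(0)

# Each person has a stable "true" level plus random measurement noise.
people = [random.gauss(50, 10) for _ in range(10_000)]

def observed(true_score):
    return true_score + random.gauss(0, 10)

time1 = [observed(t) for t in people]
time2 = [observed(t) for t in people]

# Select persons with extreme (very low) scores at time 1, as a study might.
selected = [i for i, s in enumerate(time1) if s < 30]

mean_t1 = sum(time1[i] for i in selected) / len(selected)
mean_t2 = sum(time2[i] for i in selected) / len(selected)

print(f"time 1 mean of extreme group: {mean_t1:.1f}")
print(f"time 2 mean of extreme group: {mean_t2:.1f}")  # closer to 50, with no intervention
```

No intervention occurs between the two measurements, yet the extreme group's mean moves back toward the population mean -- exactly the spurious "improvement" this threat describes.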
-
9 threats to internal validity
6) selection biases: comparisons don't have any meaning unless the groups we are comparing are comparable. Common in evaluative studies not using matched/equivalent groups
-
9 threats to internal validity
7) experimental mortality: subjects drop out of a social experiment before completion affecting statistical conclusions
-
9 threats to internal validity
8) ambiguity about the direction of causal inference: occurs when there's ambiguity about the time order of the independent and dependent variables. --e.g. a study finds that completers of an AODA program are less likely to be abusing substances than dropouts -- did the program influence participants to abuse less, or did abstinence help people complete the program?
-
Last threat to internal validity
9) diffusion or imitation of treatment: occurs when service recipients or providers are influenced in ways that diminish the planned differences in treatment conditions (independent variables)
-
What is pre-experimental design?
- 1) One shot case study
- X O
- X is introduction of a stimulus (independent variable) and O represents observation
- offers no way to really assess the impact of X
- fails to control for any threats to internal validity
- 2) One group pretest-posttest design
- O1 X O2
- Assessing the change on the dependent variable before and after introducing the independent variable X
- Assesses correlation and controls for time order; doesn't account for other factors that could influence the relationship, e.g. threats such as history, maturation, testing and statistical regression
-
What is experimental design?
- It attempts to provide maximum control for threats to internal validity
- Difficult to achieve in SW research
- Essential components of experimental design:
- 1) random assignment to experimental and control groups
- 2) introducing the independent variable to the experimental group and withholding it from the control group
- 3) Comparing the amount of change for each group on the dependent variable
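The three components above can be sketched in a few lines (the group sizes, score distribution, and treatment-effect constant are invented for illustration):

```python
import random

random.seed(1)

participants = list(range(100))
random.shuffle(participants)

# 1) Random assignment to experimental and control groups.
experimental = participants[:50]
control = participants[50:]

# 2) Introduce the independent variable (treatment) only to the
#    experimental group; here its effect is an assumed constant of 8 points.
def posttest_score(treated):
    baseline = random.gauss(50, 5)
    return baseline + (8 if treated else 0)

exp_scores = [posttest_score(True) for _ in experimental]
ctl_scores = [posttest_score(False) for _ in control]

# 3) Compare the two groups on the dependent variable.
diff = sum(exp_scores) / len(exp_scores) - sum(ctl_scores) / len(ctl_scores)
print(f"difference between group means: {diff:.1f}")
```

Because assignment is random, the two groups are expected to be comparable on everything except the treatment, so the difference in means can be attributed to the independent variable.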
-
Examples of experimental design
- Pretest - post test control group design
- R = random assignment
- R O1 X O2
- R O1 O2
- Controls for all threats except testing and instrumentation
- Solomon four group design
- R O1 X O2
- R O1 O2
- R X O2
- R O2
- Highly regarded --- rarely used
-
Understand the logic of single-subject design and its use in SW practice
- Logic: using time series to evaluate impact of programs/interventions
- Treated groups (or individuals) become their own control group
- O1 O2 O3 O4 O5 O6 O7 O8 O9 O10 O11
- Repeated measures attempt to identify stable trends
- If change occurs after introducing the independent variable, it's plausible that the independent variable caused it
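The time-series logic above can be sketched as a simple before/after (AB) comparison; the scores below are invented for illustration:

```python
# Repeated measures of a target behavior (e.g. weekly outbursts).
baseline = [9, 8, 9, 10, 9]        # phase A: before the intervention
intervention = [7, 5, 4, 3, 3, 2]  # phase B: after introducing the independent variable

mean_a = sum(baseline) / len(baseline)
mean_b = sum(intervention) / len(intervention)

# A stable baseline trend followed by a clear shift after the intervention
# makes it plausible that the intervention caused the change.
print(f"baseline mean: {mean_a:.1f}, intervention mean: {mean_b:.1f}")
```

The repeated baseline measures play the role a control group plays in a group experiment: they show what the trend looked like without treatment.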
-
Understand the logic of single-subject design (SSD) and its use in SW practice
- ARGUMENTS FOR SSD
- Can identify, w/good internal validity, interventions that seem to work
- Accumulated findings can be used to identify efficacy of interventions or agency effectiveness
- Can be used to examine if research done on larger groups is effective in a particular setting/individual
- Useful tool to monitor changes in target behaviors and client process
- ARGUMENTS AGAINST SSD
- Not good in crisis situations
- Heavy caseloads limit its use
- Clients may resent heavy monitoring
-
Define establishing baselines
The phase of repeated measures that occurs before the intervention or policy is introduced; a control phase -- serves the same function as a control group does in a group experiment
-
How to interpret a graphic representation of the relationships
-
Be familiar with the designs AB, ABAB, etc
- AB: a baseline phase (A) followed by an intervention phase (B)
- ABAB (withdrawal/reversal design): the intervention is withdrawn and then reintroduced; if the target behavior shifts at each phase change, causal inference is strengthened
-
Strengths of survey research
- Advantage of economy and the amount of data that can be collected
- Useful when we describe the characteristics of a large population
- Make large samples feasible
- Findings may be more generalizable
- Enable us to analyze multiple variables simultaneously
- Flexible
- Stronger on reliability
-
Weaknesses of survey research
- Weak on validity and strong on reliability
- Artificial
- Potentially superficial
- Not good at fully revealing social processes in their natural settings
-
What is the logic behind the use of probability sampling?
Sampling involves essential decisions about what will and will not be observed -- the process of selecting observations. A probability sample must contain the same variations that exist in the population
-
What does probability sampling allow you to do in a piece of research?
- More representative because they reduce bias
- Probability theory permits us to estimate the accuracy or representativeness of the sample
-
The uses of probability in social research
-
The uses of non-probability in social research
- 1) Reliance on available subjects
- -risky
- -asking random people questions
- -not representative of the entire population
- 2) Purposive or judgmental sampling
- 3) Quota sampling - if group has clearly defined categories you want to include people from each one
- 4) snowball sampling -- sample begins w/a few relevant subjects and expands through referral
-
Ways to reduce sampling error (larger n and a more homogeneous sample)
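A quick simulation illustrates why a larger n reduces sampling error (the population parameters here are invented for illustration):

```python
import random
import statistics

random.seed(0)

# An invented population of 100,000 scores.
population = [random.gauss(100, 15) for _ in range(100_000)]
true_mean = statistics.mean(population)

def avg_sampling_error(n, trials=500):
    """Average distance between a sample mean of size n and the true mean."""
    errors = []
    for _ in range(trials):
        sample = random.sample(population, n)
        errors.append(abs(statistics.mean(sample) - true_mean))
    return statistics.mean(errors)

small = avg_sampling_error(25)
large = avg_sampling_error(400)
print(f"avg error, n=25:  {small:.2f}")
print(f"avg error, n=400: {large:.2f}")
```

The same logic explains the homogeneity point: a population with a smaller spread would shrink both error figures.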
-
Purposes of program evaluation
- to assess and improve the conceptualization, design, planning, administration, implementation, effectiveness, efficiency and utility of human service programs
- *Not a specific research method, can employ all methodologies
-
Purposes of program evaluation - summative
Assess the ultimate success of programs and problems in how programs are being implemented / Assess success by examining changes in target behaviors after the intervention takes place
-
Purposes of program evaluation - formative
Focus on obtaining info that is helpful in planning the program and improving its implementation and performance / Assessing problems in how the program is being implemented, assuring fidelity of the intervention and essential program structure to goals and objectives
Assessing data needed in program planning and development
-
Specific approaches to needs analysis
-
Relevant political considerations in program evaluation
Important: findings can provide ammo for the opponents or supporters of a program/agency
-
Logistical problems
Getting subjects to do what they are supposed to do, e.g. getting instruments distributed, using them correctly, and returning them
-
How to minimize logistical problems:
- learn about stakeholders
- find out who wants the eval and why, and who doesn't
- involve stakeholders in planning stages to foster cooperation
- engage program in planning and implementation
- tailor the eval report's form/style to those who need it; aim for clarity and succinctness
-
Strengths/Weaknesses of qualitative research method
- Strengths:
- flexibility
- cost
- depth of understanding
- Weaknesses
- Subjectivity and generalizability
- Potential biases
-
Strengths/weaknesses of unobtrusive research methods
- Definition: learning about human behavior by observing what people leave behind -- detective work (without their knowing they're being observed)
- Strengths:
- +costs less
- +takes less time
- Weakness:
- -may be outdated
- -validity
- -missing data
-
Content analysis
- is a method of transforming qualitative materials into quantitative data
- involves coding and tabulating occurrences of communication content. Coding involves manifest (visible surface content) and latent (underlying meaning) meanings; the end result is numerical
- Well suited for answering who says what, to whom, why, how, and with what effect
- Sampling method is based on what we will look at, determining units of analysis and units of observation, e.g. portrayals of MI on TV -- could use random, systematic or cluster sampling
- +Strengths: cheap and easy; few staff or equipment needs; safety; can redo it w/minimal difficulty
- -Weakness: limited to recorded communications, validity issues
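Manifest coding -- counting visible surface content -- can be sketched as a simple tabulation; the documents and code words below are invented for illustration:

```python
import re
from collections import Counter

# Invented transcripts to code.
documents = [
    "The client reported feeling hopeful after the session.",
    "Staff described the client as hopeful but anxious.",
    "The client remained anxious throughout the interview.",
]

# Manifest coding: count visible occurrences of each code word.
codes = ["hopeful", "anxious"]
counts = Counter()
for doc in documents:
    words = re.findall(r"[a-z]+", doc.lower())
    for code in codes:
        counts[code] += words.count(code)

print(dict(counts))  # the end result is numerical
```

Latent coding, by contrast, would require a human judgment about each document's underlying meaning before tabulating, which is harder to automate and raises reliability questions.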
-
Using existing data (secondary data analysis)
- Limited to what exists
- Was the purpose for compiling the data equivalent to yours?
- Ecological fallacy is a potential problem: using data collected at one level and applying it to another
- Reliability issues: e.g. FBI Uniform Crime Reports -- issues with accuracy bc reporting is voluntary, and crime trends and record keeping affect budgets
- Why this is popular: easier and faster
-
Historical research (comparative analysis)
- Main source is historical records/data; not limited to communications -- also comparing social processes (theoretical structures) over time to gain understanding
- Important differences between primary and secondary data sources: sources can always be biased in some way, often depending on why the documents were created and how they were used in the period they were created
- Need for triangulation: collect historical data from several sources and see where the truth lies