-
What are the 4 approaches to knowledge?
- authority
- personal experience
- rationalism (using reasoning and logic)
- empiricism (systematic observations)
-
Five Limitations of Personal Experience (commonsense psychology)
- 1. Confirmation Bias and the Discounting Problem
- 2. The Limited Data Problem
- 3. The Expectations Problem
- 4. The Base Rate/Comparison Group Problem
- 5. The Pleasant Truth Problem
-
confirmation bias and the discounting problem
a limitation of commonsense psychology
we tend to seek out information that is consistent with our expectations/beliefs and to discount information that is inconsistent with those expectations
-
The Expectations Problem
a limitation of commonsense psychology
our expectations tend to influence the way we interpret events
-
the pleasant truth problem
- a limitation of commonsense psychology
- we tend to believe things that make us feel good (things that seem right)
- this could influence the conclusions we make about behavior
-
scientific method
a method that is more likely to lead to the right answer, and a process for understanding the world that enables us to correct inevitable mistakes
the process of constructing, testing, and refining theories about natural phenomena through the use of systematic and empirical observations
theory --> generate predictions --> empirical observations --> test the predictions against the observations
-
what makes a theory good
- generative - generates new ways to think about the world
- makes predictions - risky ones
- can be tested (Falsifiable)
- are simple (parsimonious)
-
the process of reasoning in research
- have a theory - develop predictions based on theoretical principles
- design an experiment and take observations
- get data
- this data is consistent/inconsistent with the theory
- then we make a new or better theory
-
whats the best way to test your theory?
- more useful to test cases in which the theory could be disproven rather than proven
- provides boundaries
- theories must be testable!!
-
signals that it is actually pseudoscience rather than science
- failures are rationalized or explained away
- reliance on anecdotes - personal experiences
- lack of tests
- lack of supporting evidence
- the 5 limitations of commonsense psychology!
-
Hypotheses
- informally - an explanation of a behavior/event
- formally - a prediction concerning the relationship between variables (IF... then)
- like theories, they must be testable, falsifiable, and parsimonious
-
Research things to be aware of
- uninteresting studies - obvious outcomes
- unimportant research - who cares
- unnecessary research - why spend money on that?
-
Sources of Ideas for Research
- everyday life
- practical issues - try to solve a problem
- based on past research (refine it, increase generality, better control)
-
How to develop a hypothesis
- 1. state the hypothesis in general terms - the question/problem
- 2. operationalize the hypothesis - what are you measuring, what are you manipulating
- 3. what methods will be employed - how to test, look at the relationship
- 4. what results do you anticipate - how will they confirm/disconfirm your hypothesis?
-
Theory Vs Hypothesis
- Theory - organized framework of principles that tries to explain some set of facts about the world - must be tested and supported by evidence
- Hypothesis - a specific, testable prediction, related to a particular study, that describes expectations about outcomes
-
1947 Nuremberg Code
- 26 Nazi physicians tried for research atrocities
- resulted in the Nuremberg Code - the first internationally recognized code of research ethics
- voluntary/informed consent
- favorable risk/benefit analysis
- right to withdraw without penalty
-
1964 Declaration of Helsinki
- same principles as the Nuremberg Code, plus:
- interest of the subject is a higher priority than the interests of society (can't do a study just because it will benefit the common good)
- every subject should get the best known treatment
-
National Research Act of 1974
- established the IRB (Institutional Review Board) to regulate research on humans
- required IRB approval for most research studies
- defined the procedures an IRB must follow
- established criteria that an IRB must use to approve studies
-
US Belmont Report - 1979
- 3 basic ethical principles
- Respect For persons - individuals as autonomous agents
- Beneficence - protect persons from harm, maximize benefits
- Justice - benefits and risks must be distributed fairly
-
respect for persons
- 1st principle in Belmont Report
- decisions about participation must be voluntary
- people are able to exert their autonomy
- must give informed consent - understands/accepts their role as a subject
- explain the purpose/procedure/risks/alternatives/ability to withdraw/duration
- privacy/confidentiality
- protecting those people who are not autonomous
-
informed consent
- understanding and willingness to participate
- understanding of possible risks and benefits and knowing that one does not have to volunteer
-
Beneficence
- Principle #2 of the Belmont Report
- must secure the well-being of the subject - protect from harm and make sure that they experience the possible benefits of involvement
- maximize possible benefits/minimize possible harms
-
Justice
- Principle #3 in Belmont Report
- the issue of justice arises when we decide who will be given an opportunity to participate and who will be excluded
-
Deception
- it is often used, but it is ethically problematic
- you must make the case why deception is important to your study
- the reason for deception MUST outweigh the possible risks - it cannot affect their willingness to participate
- requires debriefing! - deception must be explained
-
animal welfare
- an IACUC - institutional animal care and use committee (like an IRB - but for animals)
- same kinds of ethical standards as for humans - there are protections for animals!
-
Why write about research?
- to communicate research findings
- enable critical evaluation
- enable replication and extension
-
good scientific writing is...
- objective
- concise and precise (doesn't mean dull)
- unbiased
-
what does the research report do...
- adds to scientific knowledge (relates to old, provides new)
- conveys information clearly and concisely
-
why use a standard style for writing research?
- clarity of presentation
- ease of evaluation
- readers know what to expect
-
what does the introduction say?
- the research questions - in general terms - why they are important
- review of previous work
- overview of current work - hypothesis, design/variables of interest
-
what does the discussion say?
- summary of patterns
- what the results mean
- how do the results relate to initial hypotheses
- implications/take home message
-
what is a construct?
- an inference based on a theory of behavior
- sometimes we can't directly observe the things we want to study (like amount of motivation)
- requires operationalization
-
operationalization
- translating abstract concepts into concrete variables that can be manipulated and/or observed
- the decision about how to translate the abstract concept into a concrete observation of behavior
-
reliability vs. validity
- reliability - consistency or dependability of measurement (compared occasion to occasion and measured via correlations - see the sketch below)
- validity - whether the measure actually relates to the behavior it is expected to measure
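A minimal sketch of the "measured via correlations" point: test-retest reliability computed as a Pearson correlation between scores from two measurement occasions. Python is assumed here, and the scores are made-up illustration data, not from any real study.

```python
# Minimal sketch: test-retest reliability as the Pearson correlation between
# scores from two measurement occasions. Scores below are hypothetical.
from math import sqrt

def pearson_r(x, y):
    """Pearson correlation between two equal-length lists of scores."""
    n = len(x)
    mean_x, mean_y = sum(x) / n, sum(y) / n
    cov = sum((a - mean_x) * (b - mean_y) for a, b in zip(x, y))
    ss_x = sum((a - mean_x) ** 2 for a in x)
    ss_y = sum((b - mean_y) ** 2 for b in y)
    return cov / sqrt(ss_x * ss_y)

occasion1 = [12, 15, 9, 20, 17, 11]   # same people measured twice
occasion2 = [13, 14, 10, 19, 18, 12]
print(f"test-retest reliability r = {pearson_r(occasion1, occasion2):.2f}")
```

A value close to 1 indicates consistent measurement from occasion to occasion; the same correlation machinery is used for criterion validity (e.g., correlating a test score with a later outcome).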
-
types of validity
- face validity
- content validity
- criterion validity
- construct validity
-
face validity
it appears on the surface to measure what is intended (weakest)
-
criterion validity and the two types
- accurately predicts behavior
- predictive - predicts future performance like SAT
- concurrent - predicts present performance (driving tests)
-
construct validity
- measures the intended construct in a theoretically-motivated way - like the IQ test
- is what you are measuring really related to what you want to measure in terms of your construct?
-
extraneous variables
any variable other than the variables of interest that may influence the outcome
-
Systematic influences
when the effects of the extraneous variables are systematic (influence certain conditions/measures more than others) they are very hazardous to experimental validity
-
confounding influences
other factors systematically vary along with the variables of interest
-
the threats to internal validity
- history - influenced by a past event?
- maturation - internal changes over time?
- testing - effects of a test they've seen before?
- instrumentation - changes in how the data are collected?
- regression - groups preselected to represent extremes?
- selection - lack of random assignment?
- attrition (mortality) - did subjects drop out or otherwise disappear?
-
what is control/why do we need it
true experiments require that the researcher have control over as many aspects as possible - get rid of extraneous variables (so validity is not threatened)
-
3 sources of variability
- physical
- social - demand characteristics, experimenter bias
- personality of the participants and the experimenters
-
hawthorne effect
an effect in the direction expected but not for the reason expected
-
demand characteristics
- behavior is shaped by expectations about how they should behave (cues from the situation or attempts to guess the hypothesis)
- use single blind studies, or use cover stories (deception)
-
single blind studies
- a way to control demand characteristics
- participants aren't told which condition they are in
-
experimenter bias
- cues from the experimenter about how the subject should respond
- self-fulfilling prophecies (the Rosenthal effect with the good vs. bad students)
-
how would you control for experimenter bias?
- double-blind studies
- standardized procedures and coding
-
why use random assignment?
- eliminate self-selection bias
- make it likely that the differences between the groups wash out
-
independent samples
- important for the samples to be random
- equal chance of being assigned to either of the groups
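A minimal sketch of random assignment to two independent groups, assuming Python's standard random module; the participant labels are hypothetical.

```python
# Minimal sketch: random assignment of participants to two independent groups,
# so each person has an equal chance of landing in either condition.
import random

participants = [f"P{i:02d}" for i in range(1, 21)]  # 20 hypothetical subjects
random.shuffle(participants)                         # removes self-selection bias

half = len(participants) // 2
treatment_group = participants[:half]
control_group = participants[half:]

print("treatment:", sorted(treatment_group))
print("control:  ", sorted(control_group))
```

With enough participants, shuffling like this makes it likely that extraneous individual differences wash out across the groups.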
-
Main effects
- the effect of a single IV on a DV
- if A changes, it will affect B
-
Interactions
- how effect of one IV changes across levels of another IV
- the impact of one IV on the DV changes across levels of the other IV
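A small worked example of a 2x2 design: marginal means give the main effects, and the "difference of differences" shows the interaction. The cell means are invented purely for illustration.

```python
# Minimal sketch of a hypothetical 2x2 design (cell means of a DV).
# Rows = levels of IV A, columns = levels of IV B.
cell_means = {
    ("A1", "B1"): 10, ("A1", "B2"): 14,
    ("A2", "B1"): 12, ("A2", "B2"): 24,
}

# Main effect of A: compare the marginal means of A1 vs. A2, averaging over B.
mean_A1 = (cell_means[("A1", "B1")] + cell_means[("A1", "B2")]) / 2  # 12
mean_A2 = (cell_means[("A2", "B1")] + cell_means[("A2", "B2")]) / 2  # 18
print("main effect of A:", mean_A2 - mean_A1)

# Main effect of B: compare the marginal means of B1 vs. B2, averaging over A.
mean_B1 = (cell_means[("A1", "B1")] + cell_means[("A2", "B1")]) / 2  # 11
mean_B2 = (cell_means[("A1", "B2")] + cell_means[("A2", "B2")]) / 2  # 19
print("main effect of B:", mean_B2 - mean_B1)

# Interaction: does the effect of B change across levels of A?
effect_of_B_at_A1 = cell_means[("A1", "B2")] - cell_means[("A1", "B1")]  # 4
effect_of_B_at_A2 = cell_means[("A2", "B2")] - cell_means[("A2", "B1")]  # 12
print("interaction (difference of differences):", effect_of_B_at_A2 - effect_of_B_at_A1)
```

If the effect of B were the same at both levels of A, the difference of differences would be zero and there would be no interaction.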
-
the four types of reliability
- test/retest reliability
- alternate-forms reliability
- split-half reliability
- interrater reliability
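As one concrete illustration of the list above, a minimal sketch of interrater reliability computed as simple percent agreement between two hypothetical raters coding the same observations (percent agreement is the simplest index; correlations or kappa are also used).

```python
# Minimal sketch: interrater reliability as simple percent agreement.
# Two hypothetical raters code the same 10 observed behaviors.
rater1 = ["aggressive", "neutral", "neutral", "helpful", "aggressive",
          "helpful", "neutral", "aggressive", "helpful", "neutral"]
rater2 = ["aggressive", "neutral", "helpful", "helpful", "aggressive",
          "helpful", "neutral", "neutral", "helpful", "neutral"]

agreements = sum(a == b for a, b in zip(rater1, rater2))
percent_agreement = agreements / len(rater1) * 100
print(f"interrater agreement: {percent_agreement:.0f}%")
```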
-
confounds
uncontrolled extraneous variables, or flaws in an experiment