epidemiology final Flashcards

(159 cards)

1
Q

what is a randomized trial?

A

A group of eligible people (the study sample) is randomly assigned by an investigator to either an intervention or control condition

2
Q

what is intervention efficacy?

A

evaluated by comparing outcomes among those who receive the
intervention to those receiving a control therapy/intervention

3
Q

types of randomized trials

A

1) natural experiments

2) community trials

3) cluster randomized trials

4) individual-level randomization

4
Q

explain natural experiments

A

levels of exposure to a presumed causative agent differ in a population in a way that is relatively unaffected by other extraneous factors, such that the situation resembles a planned trial

5
Q

example of a natural experiment

A

John Snow, cholera experiment

6
Q

explain community trials

A

experimental study where one group of communities receives an intervention and another group of communities does not

7
Q

example of a community trial

A

Water fluoridation trials comparing dental caries in Grand Rapids, MI (intervention) vs. Muskegon, MI (control)

8
Q

explain cluster randomized trials

A

clusters (e.g., individuals in communities, hospitals, or other aggregates of individuals) are randomized, and all consenting persons are enrolled

9
Q

example of cluster randomized trials

A

influenza vaccination in some communities and not in others to assess herd immunity

10
Q

when to choose a community or cluster randomized trial

A

1) Nature of the intervention

2) Acceptability and reduced stigma

3) Are there enough sub-groups within clusters

11
Q

explain individual level randomization

A

randomize eligible individuals to an intervention (treatment) or a control/placebo/standard-of-care condition

12
Q

key characteristics of individual-level randomization

A

1) Randomization

2) Blinding

3) Control/placebo group vs. “controlled” trial

13
Q

FDA/WHO classification of RCTs

A

Phase I, II, III, IV

14
Q

Phase I

A

Initial studies to determine metabolism, pharmacologic actions, and safety
of drug in humans

15
Q

Phase II

A

Controlled clinical studies evaluating preliminary efficacy of a drug in patients with the disease; determine common short-term side effects

16
Q

Phase III

A

Expanded trials for gathering additional information on overall benefit-risk
relationship of the drug [Needed for FDA approval]

17
Q

Phase IV

A

post-marketing trials in general population

18
Q

what does randomization ensure?

A

that intervention and control groups “look alike” with respect to all other factors except for the
treatment at the time of enrollment

19
Q

randomization examples

A

Random number table

computer generated programs

sealed envelopes with randomization info
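The computer-generated approach can be sketched as follows (a minimal illustration, not from the course materials; the function name and participant IDs are hypothetical). A balanced list of arm labels is shuffled with a fixed seed so the allocation sequence is reproducible:

```python
import random

def randomize(participant_ids, seed=2024):
    """Assign participants 1:1 to intervention or control by shuffling labels."""
    rng = random.Random(seed)  # fixed seed -> reproducible allocation list
    labels = ["intervention", "control"] * (len(participant_ids) // 2)
    rng.shuffle(labels)
    return dict(zip(participant_ids, labels))

# hypothetical IDs for a 6-person pilot
assignments = randomize(["P001", "P002", "P003", "P004", "P005", "P006"])
```

In practice the shuffled list would be sealed in envelopes or held by a coordinating center so enrollment staff cannot predict the next assignment.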

20
Q

examples of nonrandom allocations

A

Alternate assignment of treatments

Assignment by day of the week

21
Q

random sampling

A

ensures generalizability of survey results

22
Q

randomization ensures

A

ensures comparability of the experimental group and the control
group when the pool of study participants is large

23
Q

when is it unethical to randomize?

A

1) An effective treatment already exists

2) personal choice

3) Risks of new treatment likely to exceed risks of existing treatment

24
Q

why is it unethical to randomize if an effective treatment already exists

A

e.g., in trials of therapies to prevent mother-to-child HIV transmission, cannot
randomize mothers to a placebo treatment – need to provide standard of care

25
why is it unethical to randomize based on personal choice
e.g., trials of different types of contraceptives (pill vs. IUD) are ethically questionable because women have the right to select a method of their choice. (One can randomize within a method type, e.g., pill A vs. pill B.)
26
what's blinding
Randomization to intervention or control condition is not apparent to everyone involved in the study during the course of the trial
27
blinding is used to
avoid bias in (1) enrollment, (2) during trial, (3) follow-up
28
3 levels of blinding
1) Single blind – participants are blinded but investigators are aware of who is in the intervention and control arms
2) Double blind – neither participants nor investigators know who is receiving the intervention
3) Triple blind – participants and investigators don't know the intervention assignment, and data analyses are conducted in a manner that is removed from the investigators
29
what's the "control" condition
The control condition provides a comparison arm by which the investigator gauges the effect of the treatment
30
controls may receive...
no treatment (e.g., placebo) if no standard of care is currently available
31
"controlled" refers to
a. Pre-specified hypotheses
b. Established inclusion/exclusion criteria
c. Baseline assessments
d. Primary and secondary endpoints (e.g., behavioral change, HIV/HCV incidence) to evaluate hypotheses
e. Carefully detailed study protocols (enrollment, treatment delivery, and follow-up)
f. Rigorous monitoring of treatment and outcomes
g. Monitoring retention & loss to follow-up
h. Ethical considerations
i. Analysis plans and stopping rules
32
Questions to consider prior to trial:
● What effect is expected from the intervention/therapy?
● How "much" of an effect is expected?
● What adverse events are anticipated?
● How much time is needed to detect outcomes in trial arms?
● What is the required sample size at onset of the trial, taking into consideration potential loss to follow-up during the observation period?
33
All trials are based on
a large body of epidemiological, clinical, and behavioral evidence supporting the need for therapy/intervention
34
inclusion/exclusion criteria
1) Want a sample most efficient for answering the clinical question
2) Eligibility is pre-established
3) These criteria will impact the sample size required to detect an association between intervention and outcomes of interest
35
the more strict the eligibility criteria,
the less generalizable the results
36
eligibility is determined by
● Ensure participants meet criteria for the intervention
● By sociodemographic characteristics and other health-related events (absence of contraindications, etc.)
● Exclude those with difficulty complying
● Exclusions made to control error
37
Baseline assessments are done to
characterize the study cohort
38
the first table of a final report of any RCT typically compares the baseline characteristics between
the two study groups (intervention and control condition) to ensure that randomization worked!
39
what characterizes study cohort?
● Identifying information (name, address, ID#) ● Demographics (age, race, gender, etc.) ● Clinical factors of relevance
40
Trial outcomes or endpoints
1) Primary endpoint
2) Secondary endpoint
41
Primary endpoint:
morbidity or mortality specific endpoint
42
Secondary endpoint:
disease indicator, health behavior, etc.
*Selection of the "best" endpoint is often complicated, so investigators may sometimes choose surrogate endpoints
43
example of a primary endpoint
in most current clinical trials of HIV disease therapy, it is CD4 cell count or HIV viral load
44
example of a secondary endpoint
an AIDS-defining event or death
45
Study protocols ensure that
all trial steps are fully documented and adhered to by staff over entire study period
46
adherence to the intervention protocol:
● Measuring compliance with once-a-day vs. complex medication schedules via self-report, pill counts, or urinary metabolite levels (e.g., MEMS caps)
● Behavioral interventions should be well tolerated – are participants attending all sessions?
47
Rigorous monitoring and assessments of all endpoints at all follow visits:
Surveys, medical exams, tests, etc. to assess for outcomes and factors associated with the intervention and the trial outcomes
48
measuring endpoints:
Time at risk measured via person-time
49
Trial retention:
1) Call participants the day before clinical visits
2) Provide reimbursement
50
Frequency and duration of follow up depend upon:
1) Type of endpoint (e.g., response to treatment, development of new disease, progression of disease, behavioral change, sustainability of change)
2) The level of risk (the higher the risk, the more frequent the follow-up)
51
Losses to follow up (LTF) must be minimized because:
● Losses are often selective (e.g., high-risk persons drop out of trials) and this introduces bias
● Losses to follow-up should be comparable in the intervention and control arms to avoid biased comparisons
● Losses to follow-up reduce study power by reducing the person-time of observation
52
Ethical considerations
1) Equipoise 2) Informed consent 3) Stopping rules
53
stopping rules consist of
Decisions based on interim analyses, before the trial is over, to determine whether the intervention is:
1. beneficial (and should not be withheld from the placebo group), or
2. harmful (and the trial should be stopped), or
3. the evidence is inconclusive (and the trial should continue)
Overseen by Data Safety and Monitoring Boards (DSMB); many trials have been interrupted prior to completion
54
Analyzing by intention to treat
● A procedure in the conduct and analysis of RCTs
● All participants allocated to a given arm of the trial are analyzed together as representing that trial arm, irrespective of whether they received or completed the prescribed treatment
● If the analysis is not based on original assignment, you are breaking the randomization and the groups may no longer be comparable
● Results can be invalidated
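The intention-to-treat principle can be sketched in a few lines (illustrative data; the record fields and risks are made up): everyone is analyzed in the arm they were randomized to, including the control participant who crossed over to treatment.

```python
# Each record keeps the ASSIGNED arm, actual adherence, and the outcome (1 = event).
records = [
    {"assigned": "intervention", "adhered": True,  "event": 0},
    {"assigned": "intervention", "adhered": False, "event": 1},  # stopped treatment
    {"assigned": "control",      "adhered": True,  "event": 1},
    {"assigned": "control",      "adhered": False, "event": 0},  # crossed over
]

def itt_risk(records, arm):
    """Risk of the event among everyone assigned to `arm`, regardless of adherence."""
    group = [r for r in records if r["assigned"] == arm]
    return sum(r["event"] for r in group) / len(group)

risk_intervention = itt_risk(records, "intervention")
risk_control = itt_risk(records, "control")
```

Filtering on `adhered` instead of `assigned` would be a per-protocol analysis, which breaks the randomization as the card warns.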
55
Assessing associations in RCTs
1) Outcomes at fixed points in time 2) Events measured in person-time 3) Time-to-event
56
Outcomes at fixed points in time:
● What is the proportion with the outcome at each follow-up?
● Logistic regression (odds ratio)
57
Events measured in person-time:
● Rate of outcomes per 100 person-years
● Poisson regression, incidence rate ratio (IRR)
58
Time to event
● Time from enrollment until outcome
● Cox proportional hazards regression, hazard ratio (HR)
● Kaplan-Meier survival analyses, log-rank test
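The person-time approach (card 57) can be made concrete with a small sketch; the event counts and person-years below are made up for illustration:

```python
def rate_per_100py(events, person_years):
    """Incidence rate expressed per 100 person-years of observation."""
    return 100 * events / person_years

control_rate = rate_per_100py(events=30, person_years=1500)       # 2.0 per 100 PY
intervention_rate = rate_per_100py(events=12, person_years=1600)  # 0.75 per 100 PY

# Incidence rate ratio (IRR): intervention vs. control
irr = intervention_rate / control_rate
```

An IRR below 1 (here 0.375) indicates fewer events per unit of follow-up time in the intervention arm; a Poisson regression would additionally give a confidence interval.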
59
RCT advantages
● Can demonstrate cause-effect relationships
● May be faster & cheaper than cohort studies (e.g., observing whether smoking cessation programs reduce smoking)
● Allow investigators to control exposure levels as needed
60
RCT disadvantages
● Not ethically appropriate for some research questions
● More resource intensive
● Many interventions are not suitable for blinding
● Tested interventions may differ from common practice
● Limited generalizability due to the use of volunteers, eligibility criteria, and loss to follow-up
61
random error arises from two different processes:
1) random processes 2) sample size
62
random processes
- random variation in sampling methods, data collection, or interpretation, causing results to change unpredictably
63
sample size based errors means
as sample size increases, likelihood of random error decreases
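The sample-size point follows from a generic statistics fact (not specific to these notes): the standard error of a sample mean shrinks with the square root of n, so quadrupling the sample halves the random error. A minimal sketch:

```python
import math

def standard_error(sd, n):
    """Standard error of a sample mean: sd / sqrt(n)."""
    return sd / math.sqrt(n)

se_small = standard_error(sd=10, n=100)  # sd 10, n = 100
se_large = standard_error(sd=10, n=400)  # 4x the sample -> half the error
```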
64
systematic error is also known as
bias
65
what does bias stem from?
design, conduct, or analysis of a study that results in a mistaken estimate of an exposure's effect on disease
66
positive bias is AWAY or TOWARD the null?
AWAY
67
with positive bias, observed value is greater (stronger) or smaller (weaker) than true value
greater (stronger)
68
with neg bias, observed value is greater (stronger) or smaller (weaker) than true value
smaller (weaker)
69
major types of bias:
1) selection bias 2) confounding 3) information bias 4) missing data
70
selection bias is the
flawed selection of study participants
71
information bias is the
flaws/inaccuracies in the measurement or classification of relevant info
72
selection bias involves systematic error in
selecting, enrolling, and retaining study participants into one or more study groups (such as cases + controls or exposed + unexposed)
73
selection bias due to (what) in a cohort study
differential loss to follow-up
74
selection bias due to (what) in a case-control study
non-response bias; sometimes cases may be less interested in or unable to participate in a case-control study given their disease status
75
what is measurement bias?
systematic error in obtaining information regarding subjects in the study
76
what might happen with measurement bias?
1) cases may be misclassified as controls and vice versa
2) exposed may be misclassified as unexposed and vice versa
77
information bias or measurement error is always a
threat
78
if you don't measure exposures, other factors and outcomes correctly...
your ability to make inferences is severely compromised
79
in observational studies, measurement issues may be more likely to...
arise especially when dealing with medical records
80
potential sources of information bias or measurement error:
1) respondent 2) data collection 3) data managers 4) data analyst 5) study investigator
81
some types of information bias:
1) bias from surrogate interviews 2) surveillance bias 3) recall bias 4) reporting bias
82
differential misclassification bias:
information errors that occur DIFFERENTLY between the groups being compared
83
non-differential misclassification bias:
information errors that occur to the SAME degree in the groups being compared (do not differ between groups)
84
differential misclassification results in what kind of bias?
bias away from the null
85
what if exposure misclassification is similar in both cases and controls?
it is non-differential misclassification, which usually biases the estimate of association towards 1 (the null)
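This can be checked numerically. The sketch below (made-up counts, and an assumed 80% sensitivity / 90% specificity of exposure classification applied equally to cases and controls) shows the odds ratio being pulled toward 1:

```python
def odds_ratio(a, b, c, d):
    """OR from a 2x2 table: a=exposed cases, b=unexposed cases,
    c=exposed controls, d=unexposed controls."""
    return (a * d) / (b * c)

def misclassify(exposed, unexposed, sens=0.8, spec=0.9):
    """Expected counts classified as exposed/unexposed after measurement error."""
    obs_exposed = sens * exposed + (1 - spec) * unexposed
    obs_unexposed = (exposed + unexposed) - obs_exposed
    return obs_exposed, obs_unexposed

true_or = odds_ratio(40, 60, 20, 80)      # true association, ~2.67
a2, b2 = misclassify(40, 60)              # same error applied to cases...
c2, d2 = misclassify(20, 80)              # ...and to controls
observed_or = odds_ratio(a2, b2, c2, d2)  # attenuated toward the null
```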
86
how to reduce info bias?
1) precise operational definitions of variables
2) detailed measurement protocols
3) repeated measurements on key variables
4) training, certification, and re-certification of study staff
5) data audits (of interviewers and data centers)
6) data cleaning – visual, computer
7) re-running all analyses prior to publication
87
evaluating selection & information bias
1) Why did it occur?
2) What effect does it have on the observed association?
3) What could have been done to control for bias in this study and to prevent it in the future?
4) Always report potential sources of bias
88
missing data in observational studies
1) incomplete exposure/covariate data at baseline
2) missing outcomes (also possible in a trial)
89
in a cohort study setting, assignment of exposure/treatment is...
out of investigator's control
90
effect measure modification
EMM occurs when the relationship between two factors differs across levels of a third factor
91
in order to understand effect modification, we must specify...
what is being modified
92
measures of association for effect measure
risk ratio, rate ratio, odds ratio
93
unlike confounding, crude estimate is NOT used to
evaluate the presence of interaction
94
to evaluation interaction:
- compare stratum-specific estimates directly to look for differences across exposure levels
- this allows you to assess heterogeneity of effects (differences in effect measure estimates)
95
confounder depends on distribution of
the confounder among strata of the exposure of interest
96
confounder obscures/distorts the
true relationship b/w an exposure and outcome
97
confounder is what kind of effect to be adjusted for?
nuisance
98
effect measure modification is an inherent feature of the
strata to be described; it alters the effect in size/direction among strata
99
measures of association are a way to quantify
the relationship b/w an exposure and a disease
100
if an association is present, we have to determine if the
exposure is truly a cause of the disease
101
guidelines for causal inference
temporality
strength of association
biological gradient
cessation of exposure
replication of findings
coherence with established facts
biological plausibility
consideration of alternate explanations
specificity of association
102
temporality states that exposure must
precede disease; in diseases with long latency periods, exposures must precede the latency period
103
strength of association is reflected in the
value of the measure of association
104
biological gradient is the changes in
level of exposure and how they're related to changes in risk of disease
105
cessation of exposure is the risk of disease expected to
decline/end when exposure to a cause is reduced/eliminated
106
replication/consistency is the relationship between an
exposure and outcome is demonstrated in multiple studies and therefore more likely to be causal
107
coherence: when a relation is causal, you would expect to find
observed findings consistent with other epi and biologic knowledge
108
biological plausibility is the proposed mechanism that should be
etiologically plausible
109
what is biologic plausibility a reference to?
a "coherent" body of knowledge
110
alternate explanations is the extent to which an investigator has
ruled out other possible explanations
111
alternate explanations are reports that come from
methodologically sound studies with no potential residual confounding
112
specificity of association is the
specific exposure associated with only one disease
113
specificity of association is a holdover from
germ theory of disease
114
caveat of specificity of association
many exposures are linked to multiple diseases
115
caveat of alternate explanations
alternate explanations are limited by understanding of biology and sophistication of analysis
116
biologic plausibility caveat
problematic for new types of causes
117
caveat of coherence
data may not be available yet to directly support proposed mechanism
118
caveat of replication/consistency
sometimes there can be good reasons for why study results differ
119
caveat of cessation of exposure
removal of the cause does not always reduce disease risk
120
strength of association caveat
weak associations may be causal, but it's harder to rule out bias and confounding
121
causal inference says that a cause of a specific disease event is an
antecedent event/condition/characteristic that was necessary for the disease when it occurred; without the antecedent event, the disease event may never have occurred
122
what does "multicausality" mean
- very few exposures cause disease entirely by themselves - disease processes tend to be multifactorial
123
a component cause is any of a
set of conditions necessary for the completion of a sufficient cause
124
a necessary cause is a component cause
that is a member of every sufficient cause
125
limitations of sufficient cause model
- omits discussion of the origins of causes
- focuses on proximal causes
- does not consider factors that control the distribution of risk factors
- ignores dynamic non-linear relations
126
steps in outbreak investigation
- verify dx
- confirm outbreak status
- case definition
- descriptive epi
- develop hypothesis
- test hypothesis
- refine hypothesis + conduct additional studies
- implement control + prevention measures
- communicate findings
127
verify diagnosis (dx)
- ask if this is a known agent/disease
- what did we know about its transmission at this time?
- what do we know about identification and dx?
128
confirm outbreak via real causes
- increase in population size
- changes in population characteristics
- random variation
- true outbreak
129
confirm outbreak via artificial causes
- increased culturing of stools
- new testing protocol
- contamination of cultures
- changes in reporting procedures
130
rule out other possible causes (real + artificial) for observed increase
- no substantial changes in population size
- no appreciable changes in population characteristics
- no lab-based changes with respect to surveillance/testing
- no changes in reporting protocol
131
a case definition is a
standard set of criteria for deciding whether an individual should be classified as having the disease of interest or not
132
developing a case definition also asks what are the (blank) of this agent?
characteristics
133
advantages of case definition
lab confirmation increases specificity of case definition
134
increased specificity =
increased likelihood of correctly identifying true negative persons
135
reducing misclassification =
if more likely to identify true negatives, then less likely to include them as cases
136
disadvantages of a case definition
- lab confirmation excludes patients who didn't seek care / weren't tested / whose testing had limitations
- limits cases
137
general surveillance strategies
1) active surveillance 2) passive surveillance 3) syndromic surveillance 4) specialized surveillance systems
138
requirements for lab-based surveillance
1) routine checks for viruses/bacteria/other pathogens
2) lab serotyping provides info about cases likely linked to a common source
3) requirements: resources, facilities, training
139
descriptive epidemiology (characterizing cases)
time → epidemic curves: how to construct them and what they tell you
140
what can epidemic curves tell you
- mode of transmission (propagated source vs. common source)
- timing of exposure
- course of exposure
141
major types of epi outbreak curves (common source)
- point exposure
- intermittent exposure
142
types of epi outbreak curves (propagated source)
- single case/exposure
- secondary and tertiary cases
143
common source can help identify
- incubation period for new agents
- time of exposure for known agents
144
developing a hypothesis
- conduct surveys/interviews
- can productive interviews be done even with a limited number of patients?
- detailed surveys (sociodemographic info, clinical details, exposure history)
145
test hypothesis step
determine what type of epi study design to use
146
primary attack rates - cohort study
calculated among persons who acquire disease directly associated with an exposure
147
secondary attack - cohort study
calculated among persons who acquire disease from exposure to a primary case
- estimates spread of disease in a family/household/dorm/etc.
- measure of infectivity of the agent + effects of prophylactic agents (like vaccines)
148
primary attack rate =
# of people at risk who develop illness / total # of people at risk
149
food-specific attack rate =
# of people who ate the specific food and developed illness / total # of people who ate the food
150
secondary attack rate =
(total # of cases - initial cases) / (# of susceptible persons in group - initial cases)
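Applying the formula above to a made-up household outbreak (5 susceptible members, 1 initial case, 3 total cases):

```python
def secondary_attack_rate(total_cases, initial_cases, susceptibles):
    """(total cases - initial cases) / (susceptible persons - initial cases)."""
    return (total_cases - initial_cases) / (susceptibles - initial_cases)

# 2 secondary cases among the 4 household members exposed to the initial case
sar = secondary_attack_rate(total_cases=3, initial_cases=1, susceptibles=5)
```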
151
attack rate ratio is aka
risk ratio
152
attack rate for exposed
a / (a + b)
153
attack rate for unexposed
c / (c + d)
154
attack rate ratio aka risk ratio
[a / (a + b)] / [c / (c + d)]
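The a/b/c/d formulas above can be worked through with a hypothetical food-specific 2×2 table (counts invented for illustration):

```python
def attack_rate(ill, well):
    """Attack rate = ill / (ill + well)."""
    return ill / (ill + well)

# Hypothetical 2x2: a=30 ate & ill, b=20 ate & well,
#                   c=5 did not eat & ill, d=45 did not eat & well
ar_exposed = attack_rate(30, 20)    # a/(a+b) = 0.6
ar_unexposed = attack_rate(5, 45)   # c/(c+d) = 0.1

attack_rate_ratio = ar_exposed / ar_unexposed  # the risk ratio
```

A ratio this far from 1 would point to the implicated food as a likely vehicle, pending confirmation (e.g., a traceback study).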
155
selection of controls for outbreak
2 controls selected for every case (matched with respect to age group and sex), identified via random-digit dialing
156
ascertainment of exposure info
cases = asked about the 7 days before illness onset
controls = asked about the 7 days before interview and the 7 days before onset of illness
157
refine hypothesis and conduct studies
1) what control measures to consider?
2) what further studies can be done?
- traceback study
- applied research
158
traceback study and results are often necessary to
- identify sources of contamination and remove a public health threat
- ascertain the distribution + production chain for a food product to facilitate an effective recall
- clarify the point(s) at which the implicated food became contaminated
159
implement control and prevention measures
two levels:
- the immediate problem must be dealt with
- the pathogenic issue (larger issue)