FINAL EXAM Flashcards

(117 cards)

1
Q

types of reviews

A
  • traditional reviews (narrative reviews, critical reviews, integrative reviews)
  • systematic reviews (systematic reviews, meta-analysis, umbrella reviews)
  • qualitative & mixed method reviews (rapid reviews, scoping reviews, realist reviews, meta-ethnographies)
2
Q

what is a systematic review?

A

a review that attempts to identify, appraise and synthesise all the empirical evidence that meets pre-specified eligibility criteria to answer a specific research question.
- uses explicit, systematic methods (chosen to minimise bias + produce more reliable findings)

3
Q

meta-analysis: what is it

A

a systematic review (identifying, appraising, and synthesising) + the statistical combination of results from two or more separate studies

4
Q

quantitative synthesis in meta-analysis

A

summary effect
- compute an effect size and its variance from each primary study
- compute a weighted average of effect sizes across studies, giving more weight to more precise estimates, i.e., those with lower variance (see the sketch below)
heterogeneity
- assess and characterise variation across the set of effects
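A minimal sketch of these calculations, using made-up effect sizes and variances for five studies (a real analysis would use a dedicated meta-analysis package and also consider random-effects models):

    import numpy as np

    # Hypothetical effect sizes (e.g., standardised mean differences) and
    # their sampling variances from five primary studies.
    effect_sizes = np.array([0.30, 0.45, 0.12, 0.50, 0.28])
    variances = np.array([0.04, 0.09, 0.02, 0.12, 0.05])

    # Inverse-variance weights: more precise studies (lower variance) count more.
    weights = 1 / variances

    # Fixed-effect summary: the weighted average of the study effect sizes.
    summary_effect = np.sum(weights * effect_sizes) / np.sum(weights)
    summary_se = np.sqrt(1 / np.sum(weights))

    # Q statistic: one simple index of heterogeneity across the set of effects.
    Q = np.sum(weights * (effect_sizes - summary_effect) ** 2)

    print(f"summary effect = {summary_effect:.3f} (SE = {summary_se:.3f}), Q = {Q:.2f}")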

5
Q

strengths of meta-analysis

A
  • systematic
  • comprehensive
  • quantitative
6
Q

weaknesses of meta-analysis

A
  • garbage in, garbage out - small samples and poor-quality data in the primary studies mean the synthesis will not be strong
  • apples and oranges - very different studies pooled together without their differences being characterised (seek comparability, characterise variation)
7
Q

what are rapid reviews?

A

a type of evidence synthesis that summarises information from different research studies to produce evidence for audiences such as the public, healthcare providers, researchers, policy makers, and funders in a systematic, resource-efficient manner
- speeds up the way we plan, do, and/or share the results of conventional structured reviews (by simplifying or omitting a variety of methods)

8
Q

why do we need reviews?

A

Cochrane - review-level evidence is the most robust form of research to inform decision-making

9
Q

quality of evidence triangle

A

(top of the triangle to the bottom - highest to lowest quality)
meta-analysis
systematic reviews
critically appraised literature
RCTs
non-RCTs
cohort studies
case series/studies
individual case reports
background info/expert opinion

10
Q

how to achieve quality

A
  • transparency
  • strong methods
  • calibration
11
Q

transparency

A
  • open science practices, pre-registration, registered reports, reported conflict of interest, public peer review history
12
Q

strong methods

A
  • does the study design match the research question/aims?
  • are the results and conclusions believable?
  • validity
13
Q

4 types of validity

A
  • construct: measuring what they say they are measuring
  • internal: valid causal claims
  • external: valid generalisability
  • statistical: valid statistical conclusions
14
Q

calibration

A
  • "not the first, not the best" - be sceptical of claims to be the first or the best
  • exaggerated or grandiose claims call for scepticism
15
Q

stages of a systematic review

A
  1. develop RQ and review approach
  2. identify resources and build a search strategy
  3. analyse and report results
16
Q

stages of a meta-analysis (7)

A
  1. formulating the problem
  2. literature search
  3. screening/filtering
  4. coding
  5. statistical considerations
  6. interpreting results
  7. presenting synthesis processes and results
17
Q

meta-analysis step 1: formulating the problem

A
  • purpose
  • constructs and operations
  • answers typically found in the Introduction (+ some Method)
18
Q

meta-analysis step 2: literature search

A
  • comprehensive
  • kinds of literature (grey vs published)
  • typically found in the Method (+ some of the Introduction)
19
Q

meta-analysis step 3: screening/filtering

A
  • remove duplicates
  • clear eligibility criteria (inclusion/exclusion, titles/abstracts, then full reports)
  • found in Method
20
Q

meta-analysis step 4: coding

A
  • extract relevant info (statistical, study characteristics)
  • account for study quality
  • interrater reliability
  • found in Method and Intro
21
Q

meta-analysis step 5: statistical considerations

A
  • effect sizes and variances
  • models (fixed vs random effects)
  • heterogeneity and moderator analyses
  • found in Method + Results
22
Q

meta-analysis step 6: interpreting results

A
  • significance and size
  • heterogeneity and publication bias
  • forest plots
  • funnel plots (triangle shape expected to be symmetric in the absence of publication bias; studies at the top are larger and should be more precise in capturing the true effect - see the sketch below)
  • found in Results + Discussion
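A minimal matplotlib sketch of the funnel-plot idea, using simulated studies (all numbers are made up; the y-axis is inverted so larger, more precise studies sit at the top):

    import numpy as np
    import matplotlib.pyplot as plt

    rng = np.random.default_rng(0)

    # Simulated studies: observed effects scatter around a true effect of 0.3,
    # more widely when the standard error is large (smaller studies).
    true_effect = 0.3
    se = rng.uniform(0.05, 0.4, size=40)
    observed = true_effect + rng.normal(0.0, se)

    plt.scatter(observed, se)
    plt.axvline(true_effect, linestyle="--")
    plt.gca().invert_yaxis()  # small SE (large studies) at the top
    plt.xlabel("effect size")
    plt.ylabel("standard error")
    plt.title("Funnel plot: symmetry expected in the absence of publication bias")
    plt.show()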
23
Q

meta-analysis step 7: presenting synthesis processes and results

A
  • clarity and transparency
  • adhere to standards
24
Q

rapid reviews - what are they?

A
  • systematic reviews under restrictions (time, resources)
  • systematic and explicit methods to appraise, extract, and analyse data; specific components of the process are restricted/omitted and the scope narrowed
  • time: no longer than 6 months
25
Q

examples of shortcuts for rapid reviews

A
  • more targeted research question
  • reduced list of sources
  • narrowed timeframe - only recently published literature
  • English-only studies
  • exclusion of 'grey literature'
  • one reviewer
  • using just one database
26
Q

recommendations for rapid reviews

A
  • restrictions must be clearly delineated, justified, and reported - justification is necessary
27
Q

what is data mining?

A
  • can mean mining data from the web, e.g., social media, Reddit, APIs
  • can also mean trying to mine data for insights - asking questions about these insights and testing hypotheses
28
Q

advantages of data mining

A
  • real-world happenings (not an experiment in a lab)
  • more data - see trends better and more reliably
  • automated - less biased than manual collection
29
Q

disadvantages of data mining

A
  • junk on the web (poor-quality data)
  • biases - social media data are not perfectly representative of different demographics unless controlled for
  • not always easy to identify gender, location, etc.
30
Q

social media data ethics

A
  • someone posting on social media does not mean they have given you consent to use their data
  • social media platforms have terms of service that govern what you are allowed to do with the data
31
Q

web scraping - what is it

A
  • extracting the data that you want from websites, in a structured format
  • often used when you want to compare something, e.g., articles, tweets, posts
  • HTML tags describe what is on the page - scraping code targets those tags (see the sketch below)
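A minimal web-scraping sketch with requests and BeautifulSoup. The URL and the "article h2" tag selector are hypothetical placeholders; a real scraper targets the tags actually used on the site and must respect its terms of service:

    import requests
    from bs4 import BeautifulSoup

    response = requests.get("https://example.com/articles")
    soup = BeautifulSoup(response.text, "html.parser")

    # Tags describe what is on the page; select the ones you care about
    # and pull the data out in a structured form.
    for heading in soup.select("article h2"):
        print(heading.get_text(strip=True))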
32
Q

Application Programming Interfaces (APIs)

A

a way for machines to communicate with each other
33
Q

practicalities of APIs

A
  • to collect data from an API you need to make an account with that platform so they can keep track of how much data you are using
  • you need to make/apply for a key/token to access the API (tracks when the quota is used up and how much data has been collected)
  • different tools can help you get access to an API, e.g., Postman, Chrome developer tools
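A minimal sketch of collecting data through an API with a key/token. The endpoint, parameters, and response shape here are hypothetical; each platform documents its own, and the token is the key you applied for:

    import requests

    TOKEN = "your-api-token"  # issued when you register; usage is tracked against it

    response = requests.get(
        "https://api.example.com/v1/posts",           # placeholder endpoint
        headers={"Authorization": f"Bearer {TOKEN}"},
        params={"query": "wellbeing", "limit": 100},
    )
    response.raise_for_status()  # fails loudly if the token is invalid or the quota is used up

    for post in response.json()["data"]:              # placeholder response shape
        print(post["text"])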
34
Q

research issues to consider for API research

A
  • peripheral users are under-represented in API results - you may not know the quality of the sample
  • hard to remove "spambots" (which flood platforms with posts - advertising, politics, misinformation/disinformation) and fake accounts
  • ethical concerns around informed consent
35
Q

relational analysis - networks

A
  • social network analysis focuses on the relations between, not within, people
  • a network comprises nodes - it maps different entities to one another in important ways, e.g., how different groups of people interact
  • patterns of interaction are mapped - at scale, you start to identify communities of people and other interesting patterns within these broader networks
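A small sketch of the nodes-and-relations idea using networkx, with made-up people: two tightly knit groups joined by one bridge, which community detection can recover:

    import networkx as nx
    from networkx.algorithms.community import greedy_modularity_communities

    G = nx.Graph()
    G.add_edges_from([
        ("Amy", "Ben"), ("Amy", "Cat"), ("Ben", "Cat"),  # one tightly knit group
        ("Dan", "Eve"), ("Dan", "Fay"), ("Eve", "Fay"),  # another
        ("Cat", "Dan"),                                  # a bridge between them
    ])

    print(nx.degree_centrality(G))                 # who interacts most
    print(list(greedy_modularity_communities(G)))  # detected communities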
36
Q

sentiment analysis + applications

A
  • computer-aided detection and analysis of emotion in texts - "opinion mining"
  • applications: monitoring customer satisfaction, recommending services, political forecasting, sociological research (how do people use emotional language online?)
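A minimal sentiment-analysis sketch using NLTK's VADER, a lexicon tuned for short social-media text (the example posts are made up):

    import nltk
    from nltk.sentiment import SentimentIntensityAnalyzer

    nltk.download("vader_lexicon")  # one-off lexicon download

    sia = SentimentIntensityAnalyzer()
    for post in ["I absolutely love this service!!!",
                 "Worst update ever, so frustrating."]:
        print(post, "->", sia.polarity_scores(post))  # neg/neu/pos + compound score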
37
Q

topic modelling in social media research

A
  • a machine learning technique to discover abstract topics within a collection of documents
  • automatically identifies patterns of word clusters and their distributions
  • algorithms like Latent Dirichlet Allocation (LDA) are commonly used
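A minimal LDA sketch with scikit-learn: documents become word counts, and the model returns clusters of co-occurring words. The four toy documents are made up, and (as the next card notes) labelling the resulting topics is left to the researcher:

    from sklearn.feature_extraction.text import CountVectorizer
    from sklearn.decomposition import LatentDirichletAllocation

    docs = [
        "sleep exercise diet health",
        "election policy government vote",
        "diet health exercise wellbeing",
        "vote policy election campaign",
    ]

    vec = CountVectorizer()
    X = vec.fit_transform(docs)  # documents -> word counts

    lda = LatentDirichletAllocation(n_components=2, random_state=0).fit(X)

    # Each topic is a distribution over words; print its top words.
    words = vec.get_feature_names_out()
    for i, topic in enumerate(lda.components_):
        print(f"topic {i}:", [words[j] for j in topic.argsort()[-4:]])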
38
Q

LDA

A
  • can be applied to qualitative datasets
  • can identify themes (it won't tell you what a theme is, but you can identify it)
  • conclusions are quantitatively derived, but qualitative in what they show
39
Q

AI: Large Language Models

A
  • useful for identifying people, places, attitudes, references to change, etc.
40
Q

types of social media analyses

A
  • descriptive studies - can provide insight into personality and individual characteristics, and how they unfold in the real world
  • prediction - monitor mood, metrics, sleep, etc.
  • opportunities for identifying contextual aspects that influence individual and community outcomes
41
Q

tokenisation

A
  • the process of splitting posts or sentences into meaningful tokens or words
  • needs to be sensitive to the nonstandard use of language common on social media
  • tokens need to be converted into numbers (frequencies of words/categories)
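A minimal tokenisation sketch: split a post into tokens with a crude rule, then convert the tokens into numbers as frequencies. A real pipeline needs tokenisers that tolerate hashtags, emoji, and creative spelling:

    import re
    from collections import Counter

    post = "Soooo happy with my #newphone, best buy EVER!!!"

    tokens = re.findall(r"#?\w+", post.lower())  # crude rule: words and hashtags
    print(tokens)

    counts = Counter(tokens)  # token -> frequency (numbers a model can use)
    print(counts)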
42
Q

challenges with processing and analysing data

A
  • memory and storage
  • language use and ambiguity
  • model error
  • overfitting (reduce the number of predictors)
  • regularisation and variable selection (multicollinearity remains a problem - see the sketch below)
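A minimal sketch of regularisation and variable selection with scikit-learn, on made-up data where two predictors are nearly collinear: ridge shrinks and stabilises coefficients, while lasso can zero out a redundant predictor:

    import numpy as np
    from sklearn.linear_model import Ridge, Lasso

    rng = np.random.default_rng(2)
    x1 = rng.normal(size=100)
    x2 = x1 + rng.normal(scale=0.05, size=100)  # nearly collinear with x1
    X = np.column_stack([x1, x2])
    y = 2 * x1 + rng.normal(size=100)

    print(Ridge(alpha=1.0).fit(X, y).coef_)  # shrunk, stabilised coefficients
    print(Lasso(alpha=0.1).fit(X, y).coef_)  # may drop the redundant predictor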
43
Q

ecological fallacies

A

drawing conclusions about individuals based on grouped data
44
Q

exception fallacies

A

drawing conclusions about groups based on exceptional cases
45
Q

what do we measure with self-reports

A
  • interests
  • personality/traits
  • motivation/goals
  • abilities/skills
  • intentions/plans
  • behaviours
  • feelings/emotions
  • wellbeing
  • satisfaction
  • attitudes
46
Q

what makes a good measure

A
  • reliability (repeatable)
  • validity (measuring the construct you want to measure)
47
Q

criticisms of self-reports

A

not objective; subject to:
  • cognitive biases (limitations in the ability to provide accurate/useful responses)
  • motivational biases (goals conflict with the inclination to provide accurate/useful responses)
  • response/stylistic biases (ways we interpret/use scales)
48
Q

types of cognitive biases (3)

A
  1. recall bias
  2. gaps in self-knowledge
  3. reference group effects
49
Q

recall bias (cognitive bias) + solution

A
  • misremembering past events
  • solution: reduce the time-frame and thus the reliance on memory (make it concrete), e.g., Experience Sampling Methods
50
Q

gaps in self-knowledge (cognitive bias) + solution

A
  • we don't always know ourselves
  • solution: target concrete behaviours/experiences rather than more abstract or evaluative content; consider informant-reports in addition to self-reports (SOKA model)
  • evaluative content can be problematic because it asks about something with strong norms about it being desirable/good
51
Q

The Self-Other Knowledge Asymmetry (SOKA) Model

A
  • unobservable, non-evaluative content (low tension between reporting accurately and inaccurately) - self-knowledge is high
  • observable, evaluative content - others know us better
52
Q

reference group effects (cognitive biases)

A
  • most self-reports are relative assessments - but relative to whom?
  • may impact the mean but typically does not impact within-sample correlations; may not have a strong impact at all
  • affects cross-cultural comparisons (different frames of reference) - Simpson's Paradox
  • solutions: specify the relevant group + take care when moving between levels of analysis (see the sketch below)
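A made-up numerical illustration of Simpson's Paradox: within each group the x-y relationship is positive, but pooling the groups reverses its sign - the danger of moving carelessly between levels of analysis:

    import numpy as np

    # Two groups with the same positive within-group trend, but group A sits
    # high on y and group B low.
    x_a = np.array([1, 2, 3, 4]); y_a = np.array([8, 9, 10, 11])
    x_b = np.array([5, 6, 7, 8]); y_b = np.array([1, 2, 3, 4])

    print(np.corrcoef(x_a, y_a)[0, 1])  # +1.0 within group A
    print(np.corrcoef(x_b, y_b)[0, 1])  # +1.0 within group B

    x = np.concatenate([x_a, x_b]); y = np.concatenate([y_a, y_b])
    print(np.corrcoef(x, y)[0, 1])      # negative when the groups are pooled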
53
Q

types of motivational biases (2)

A
  1. socially desirable responding or "faking"
  2. demand characteristics
54
Q

socially desirable responding or "faking"

A

distorting one's responses to create a positive impression
55
Q

potential solutions to socially desirable responding

A
  • warnings, oaths/pledges
  • measuring (and controlling for) social desirability (indirect - social desirability scales; direct - change from baseline when prompted to fake good)
  • fake-proof scales (limited to personality assessment - e.g., pick the best descriptor from a range)
56
Q

consequences of socially desirable responding

A
  • internal/construct validity
  • restriction of range (if the measure is used to decide who is selected for a particular role, it is not helpful if everyone responds similarly)
  • personality as a "signal" - social intelligence (faking may convey some social awareness of the role)
57
Q

demand characteristics + solutions

A
  • the participant tries to provide the 'right' answer, e.g., repeated sampling may set an expectation for experiences to vary more or change over time, even when they may not
  • when the participant knows the hypothesis, evidence for the hypothesis strengthens
  • solutions: neutral framing of survey info; assurance of anonymity
58
Q

types of response/stylistic biases

A
  1. acquiescence
  2. extreme responding
  3. order effects
  4. midpoint responding
  5. careless/inattentive responding
  6. framing effects
59
Q

response styles definition

A

'trait-like' features of the individual - habitual ways of responding to self-report measures
60
Q

response set definition

A

'state-like' responses to the survey or situation - a style of responding that has been cued/triggered at that moment by something in the situation
61
Q

acquiescence + solution

A
  • tending to agree or respond 'yes' - or conversely, nay-saying
  • can be elicited by repeated positively/negatively framed items
  • solution: include a balance of forward- and reverse-scored items (see the sketch below)
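A minimal sketch of reverse scoring on a 1-5 Likert scale: a reverse-keyed item is recoded as (low + high) - response, so a balance of forward- and reverse-scored items stops yes-saying from inflating the scale mean (the items are hypothetical):

    def reverse_score(response, low=1, high=5):
        """Recode a reverse-keyed Likert response, e.g., 5 -> 1, 4 -> 2."""
        return (low + high) - response

    responses = {
        "I am organised": 5,
        "I leave my belongings lying around (R)": 4,  # (R) marks reverse-keyed items
    }

    scored = [reverse_score(v) if item.endswith("(R)") else v
              for item, v in responses.items()]
    print(scored, "-> scale mean:", sum(scored) / len(scored))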
62
Q

extreme responding + solution

A
  • tending to select the most extreme options - often due to a problematic Likert scale
  • solution: use fewer and clearer Likert-scale points
63
Q

order effects + solution

A
  • aka 'carry-over' effects
  • solution: counterbalance and randomise
64
Q

midpoint responding + solution

A
  • tending to select the midpoint - commonly occurs when respondents have little knowledge of or opinion about the content
  • solution: educate respondents; use a Likert scale without a midpoint
65
Q

careless/inattentive responding + solution

A
  • often a result of survey burden or low participant motivation
  • clues include straightlining (selecting the same response all the way down), speeding, and skipping
  • solution: streamline surveys as much as possible, include rigorous quality-control checks, build participant engagement/motivation, screen responses
66
Q

framing effects + solution

A
  • overlap with demand characteristics
  • normative anchoring
  • leading/loaded questions
  • double-barrelled questions
67
Q

stated vs revealed preferences

A
  • what people say they prefer/choose vs what their behaviour 'reveals' they prefer
  • stated preferences are valuable, but their interpretation may be ambiguous
68
Q

strengths of self-report

A
  • many constructs are inherently subjective
  • ownership/agency/dignity
  • validity
  • objective measures tend to offer minimal improvement beyond self-reports
  • practicality/efficiency
  • fairness
69
Q

what constructs are inherently subjective

A
  • goals
  • attitudes
  • beliefs
  • feelings/emotions
  • values
  • social experiences
70
Q

ownership/agency/dignity in self-reports

A

a person is entitled to describe themselves
71
Q

validity of self-reports

A
  • stated and revealed preferences
  • intentions are among the strongest predictors of behaviour (the "intention-behaviour gap" notwithstanding)
72
Q

why do objective measures offer minimal improvement beyond self-reports

A
  • "implicit measures" (response times, error rates) and mobile sensing measures are useful additions to self-reports, but few have displaced self-reports
73
Q

the practicality/efficiency of self-reports

A
  • some constructs, such as conscientiousness, are strong predictors of a variety of outcomes and can be measured in less than 2 minutes
74
Q

fairness in self-reports

A
  • can be standardised and applied systematically and equally to a large sample (not distorted by biases, given their mechanically impersonal nature)
75
Q

Vroom's Expectancy (VIE) theory (in self-reports)

A

in the presence of an extrinsic reward:
  • Valence - the extrinsic reward for providing desirable responses must be regarded as more desirable than the alternatives
  • Instrumentality - providing desirable responses must be perceived as somehow conducive to receiving the reward
  • Expectancy - respondents must believe they are likely to be successful in achieving the outcome of providing a desirable response
76
Q

Dunlop et al. (2022) faking study in self-report

A
  • perceived desirability of the opportunity was the strongest predictor of faking
  • higher-ability participants also appeared to fake to a greater extent
  • faking may represent an adaptive response to the situation
77
Q

what insights does marketing research investigate

A
  • brand loyalty (how brands are perceived by consumers)
  • how customers choose brands
  • customer satisfaction
  • how brands communicate in the market
  • sizing an opportunity - identifying a need/gap in the market
  • why consumers switch - to stem defection before it happens
78
Q

offer optimisation framework

A

product lifecycle: idea --> development --> introduction (these three stages form new product development) --> growth --> maturity --> decline
79
Q

common research objectives in market research (new product development)

A
  • understand the barriers to overcome
  • optimal pricing strategy
  • preference of attributes
  • measure the size of the opportunity
  • willingness to pay across the audience type
80
Q

qualitative phase in market research

A
  • understand market perception and experience
  • online communities can help
81
Q

what is choice modelling?

A
  • a quantitative research technique used to understand individuals' preferences and the value they place on various product attributes in their purchase decision-making
  • evaluates the trade-offs that individuals make by studying the joint effect of multiple attributes of a product simultaneously
  • can determine the relative importance of key choice attributes
  • through difficult trade-offs, can understand what individuals truly value
82
Q

business outcomes for the client from market research

A

essential elements of their go-to-market strategy surrounding:
  • education
  • awareness
  • partnerships
  • media coverage
83
Q

how to assess advertising impact

A
  • efficiency
  • effectiveness
84
Q

what is efficiency and how is it tested

A

creating branded memories; tested via campaign evaluation and emotion metrics:
  • cut-through (distinctive, entertaining)
  • brand linkage (branding - are cues + assets distinctive)
85
Q

what is effectiveness and how is it tested

A

delivering business outcomes; tested via rational drivers:
  • on message (aligned with brand pillars)
  • business outcomes
86
Q

research in gov settings - purposes

A
  • program design (best-practice literature)
  • program operationalisation and implementation (feasibility)
  • program delivery (testing specific RQs)
87
Q

what is evaluation in gov settings

A
  • often driven by policy makers/funders to make judgements about what works, for whom, in what settings, and other key questions (driven by stakeholders)
  • takes place in applied settings
  • can be done at the beginning, during implementation, and at the end
88
Q

reasons for evaluation

A
  • planning and budgeting (funding decisions)
  • implementation/measurement (continuous quality improvement, meeting policy intent)
  • reporting and accountability (assessing impact, effective use of resources)
  • developing the body of evidence (informs what works in an applied setting; informs future efforts)
89
Q

evaluation enablers

A
  • data
  • capability within teams
  • leadership/executive buy-in
  • organisational culture
90
Q

evaluation barriers

A
  • funding
  • capability within the organisation
  • data
  • analytical skills/expertise
  • design and practicality
91
Q

why evaluate in policy

A
  • necessary within the policy cycle
  • involves program review, reporting, and modifications
  • can inform future adaptations of the program/solutions in similar settings
92
Q

who does the research/evaluation in gov

A
  • internal
  • external (outsourced)
93
Q

evaluation in practice - steps

A
  1. need for research/evaluation identified and project set up/commissioned
  2. planning/scope determined (stakeholder analysis)
  3. evaluation framework and plan developed (ethics)
  4. data collection
  5. data analysis
  6. reporting
94
Q

types of evaluations (4)

A
  • formative evaluation
  • process/implementation evaluation
  • impact evaluation
  • outcome evaluation
95
Q

formative evaluation

A
  • information to plan, refine, and improve an intervention
  • supports innovation development to guide adaptation to emergent and dynamic realities in complex environments
  • e.g., needs assessments, program logic maps, evaluability assessments
96
Q

process/implementation evaluation

A
  • measures activities that occur while a programme is running, identifying whether the separate components of a program, and the program as a whole, are being implemented as intended
  • e.g., qualitative data, administrative by-product data, surveys, economic evaluation methods
97
Q

impact evaluation

A
  • measures the immediate effects of a programme (its objectives)
  • e.g., qualitative data, administrative by-product data, surveys, economic evaluation methods
98
Q

outcome evaluation

A
  • measures the long-term effects of a programme (particularly its goals)
  • e.g., qualitative data, administrative by-product data, surveys, economic evaluation methods
99
Q

methodologies/approaches in evaluation

A
  • experimental design
  • quasi-experimental design
  • non-experimental design
  • participatory codesign
  • contribution analysis
  • Indigenous/First Nations evaluations
100
Q

experimental design in evaluation

A
  • causal relationship - change in the desired outcome for participants in the intervention vs control group
101
Q

quasi-experimental design in evaluation

A
  • typically used when experimental designs are not feasible or ethical
  • some form of comparison group is possible
  • high-quality designs can show a causal link, e.g., state comparisons
102
Q

non-experimental design in evaluation

A
  • descriptive/observational studies
  • no control group, but changes in participants are measured before and after program implementation - or rely on qualitative data
103
Q

participatory codesign

A
  • evaluations designed with stakeholders
  • all stakeholders should have a voice across all parts of the project
104
Q

contribution analysis

A
  • evaluation in policy-saturated spaces
  • how to understand the contribution (not attribution) of programs in complex settings (considering non-program factors that may also influence outcomes)
105
Q

indigenous/first nations evaluations

A
  • additional considerations such as ATSI leadership, data sovereignty, reciprocity, self-determination, and transparency/accountability to communities
106
Q

steps of contribution analysis (6)

A
  1. set out the attribution problem to be addressed (the specific cause-effect question, the type of contribution, other influencing factors, the plausibility of the contribution)
  2. develop a theory of change and the risks to it (level of detail, expected contribution of the programme, assumptions)
  3. gather the existing evidence on the theory of change (the logic of the links)
  4. assemble and assess the contribution story and the challenges to it (which links are strong, credibility, do stakeholders agree, weaknesses)
  5. seek out additional evidence (new data, adjust the theory, gather)
  6. revise and strengthen the contribution story
107
Q

program logic

A

a tool used in evaluation; outlines:
  • need/problem
  • response (program/effort)
  • inputs (resources)
  • activities (part of the response)
  • outputs (what you can see and count)
  • short-term outcomes (within the first 12 months), medium-term outcomes (within 1-3 years), and long-term outcomes (3+ years); OR intermediate outcomes (things that need to happen before the final outcomes can be achieved) and final outcomes
  • plus non-program/external factors (outside the control of the program)
108
Q

evaluation framework - guide

A
  • background info
  • program logic/theory of change
  • evaluation questions/TOR
  • data sources/requirements
  • governance
  • risk management
  • reporting
supplemented by: an evaluation plan and a stakeholder/consultation plan
109
Q

example of process evaluation questions

A
  • has the program been implemented as intended?
  • are activities being delivered as intended?
  • are participants being reached?
110
Q

example of impact evaluation questions

A
  • are there changes in the anticipated outcomes?
  • to what extent are these changes attributable to the program?
  • is the program the best use of resources for its costs/inputs?
  • what is the 'net benefit' of the program?
111
Q

examples of developmental evaluation

A
  • what is the problem to be solved?
  • what are the characteristics and needs of the target population?
  • what is the most appropriate plan of action to address the problem?
112
Q

considerations for data sources and collection

A
  • be pragmatic
  • think about completeness, availability, quality, and who the data custodians are
  • maximise use of existing data
  • minimise the process burden on stakeholders
  • keep data collection and collation as simple and consistent as possible
113
Q

primary data

A
  • program-specific data collection
  • e.g., surveys/validated measures; stakeholder consultations/qualitative interviews
114
Q

secondary data

A
  • activity data
  • planning and activity documents
  • administrative by-product data
  • web/training/activity analytics and social media monitoring
  • peer-reviewed and grey literature
  • reporting/governance data (most common)
115
Q

data analysis approaches

A

no single approach - it depends on the needs of the evaluation, the availability of data, funding/budget, purpose, and scope; most adopt mixed-methods approaches
  • economic (cost-minimisation analysis, breakeven analysis...)
  • quantitative (descriptive; relationship analyses - correlations/regressions; group differences, e.g., t-tests, ANOVA - see the sketch below)
  • qualitative (thematic, causal layered, discourse analysis)
  • mixed-methods (triangulation of data from all sources)
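A minimal sketch of two of the quantitative approaches named above, on made-up pre/post program scores: a correlation (relationship analysis) and a paired t-test (group difference):

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(1)
    pre = rng.normal(50, 10, 30)        # scores before a program
    post = pre + rng.normal(5, 8, 30)   # scores after, shifted upward

    r, p_r = stats.pearsonr(pre, post)   # relationship analysis
    t, p_t = stats.ttest_rel(pre, post)  # paired group difference

    print(f"r = {r:.2f} (p = {p_r:.3f}); t = {t:.2f} (p = {p_t:.3f})")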
116
Q

key considerations for data analysis in applied settings

A
  • practicality - "comparative to what?"
  • indicators - are they policy-relevant, aligned to goals/objectives, operationalised; risks
  • evaluation capability - in the team and the client
  • evaluator skills
  • ethics and good practice
  • First Nations evaluation
  • translating evaluation findings/usability of findings
117
Q

good practice in reporting

A
  • consider the relevance/expectations/needs of the target audience
  • defensible - a clear rationale for the evaluative judgements reached (is there evidence to support it?)