How would you evaluate the quality of a new diagnostic test?
By comparing it to the independently established gold standard diagnostic test.
E.g., the CAGE questionnaire must be compared with CDT (carbohydrate-deficient transferrin) for the diagnosis of alcohol dependence. Both tests must be applied to all patients in the study regardless of diagnosis.
What is selection bias/ spectrum bias in diagnostic tests?
🧪 Selection Bias / Spectrum Bias in Diagnostic Tests
🧠 Explanation (High-school level)
Selection bias (also called spectrum bias) in diagnostic testing occurs when the people included in the study used to evaluate the test are not representative of the real patients who will receive the test in practice.
The key idea from the image is that the study population should resemble the population where the test will actually be used. If the groups are very different—for example, if the study includes mostly very sick patients while real-world patients have milder disease—the test may appear more accurate than it truly is.
This happens because disease prevalence and severity influence how well a test performs. If the study sample has a very high or very low prevalence compared with real practice, the test’s ability to correctly identify disease (predictive value) may change.
In short:
👉 Spectrum bias = the test is studied in the wrong type of patients.
🏥 Clinical Example (1–3 sentences)
Imagine researchers develop a diagnostic test for borderline personality disorder and test it only in patients admitted to a specialist personality disorder unit, where the condition is very common. The test may look extremely accurate there, but when used in a general outpatient clinic, where the condition is less common and symptoms are milder, the accuracy may be much lower.
🧸 Explain it Like You’re 10 (1–3 sentences)
Imagine testing a metal detector only on a beach where you already know lots of treasure is buried. It will look like the detector finds treasure all the time. But if you use it in a regular park, it may not work nearly as well.
🧠 Memory Hook
“Test it on the right crowd — or the results will be loud but wrong.” 🎭
What is observer bias in the use of diagnostic tests?
👀 Observer Bias in a Diagnostic Test
🧠 Explanation (High-school level)
Observer bias occurs when the person performing or interpreting a diagnostic test is influenced by what they already know about the patient. If the investigator or outcome assessor knows the patient’s diagnosis or clinical history, their expectations can unconsciously influence how they interpret the test result or record the findings.
The key point from the image is that investigators and outcome assessors should ideally be blinded (kept unaware) of the patient’s diagnosis when administering or interpreting the diagnostic test. This helps prevent their expectations from influencing the result.
If blinding is not used, the observer may look harder for abnormal findings in patients they believe have the disease, which can make the test appear more accurate than it actually is.
🧪 Related Concepts Mentioned in the Image
⚠️ Work-up Bias (Verification Bias)
This occurs when the gold standard test (the definitive test for confirming a disease) is only performed in some patients, usually those with a positive screening test.
Because the gold standard is less often used in patients with negative results, some cases of disease may be missed. This can make the test appear better at detecting disease than it really is.
Example:
If a new blood test for colon cancer is positive, doctors perform a colonoscopy (the gold standard), but if the blood test is negative, they do not. Some cancers in the negative group might therefore be missed.
🧬 Will Rogers Phenomenon
This occurs when better diagnostic tests reclassify patients into more accurate disease stages. When comparing outcomes between hospitals or studies that use different diagnostic tools, the improved staging may make survival statistics look better or worse, even though the actual patient outcomes have not changed.
🏥 Clinical Example (Observer Bias)
A radiologist is reviewing brain MRI scans for multiple sclerosis lesions. If they already know that a patient has been diagnosed with multiple sclerosis, they may scrutinise the scan more closely and be more likely to label small findings as lesions, which can inflate the apparent accuracy of the test.
🧸 Explain it Like You’re 10
Imagine a teacher marking a test knows which student usually gets top grades. They might accidentally look more kindly at that student’s answers, even if another student wrote the same thing.
🧠 Memory Hook
“If the observer knows the answer, the test may get extra ‘points’.” 🎯
Explain the role of sensitivity and specificity in a diagnostic test.
🧪 Sensitivity and Specificity in Diagnostic Tests
🧠 Explanation (High-school level)
When doctors use a diagnostic test, they want to know how good the test is at correctly identifying who has a disease and who does not. Two important measures help with this: sensitivity and specificity.
These measures are calculated by comparing the test results to the gold standard, which is the best available method for determining whether someone truly has the disease.
Researchers often organise the results in a 2 × 2 table:

|               | Disease present    | Disease absent     |
|---------------|--------------------|--------------------|
| Test positive | a (true positive)  | b (false positive) |
| Test negative | c (false negative) | d (true negative)  |

(See attached graphics)
False positives and false negatives can have serious consequences, so doctors must balance sensitivity and specificity depending on the condition and the risks of further tests.
⚖️ **Why Both Matter**
There is often a trade-off between sensitivity and specificity.
For dangerous diseases, doctors may prefer high sensitivity to avoid missing cases.
For conditions where treatment is risky or invasive, doctors may prefer high specificity to avoid unnecessary treatment.
🏥 **Clinical Example**
Doctors use a D-dimer blood test when evaluating possible pulmonary embolism (blood clot in the lungs).
The test is very sensitive, so if the result is negative, doctors can confidently rule out a clot without further invasive testing.
🧸 **Explain it Like You’re 10**
Imagine a smoke alarm in your house.
A very sensitive alarm goes off whenever there is smoke, so it rarely misses a real fire.
A very specific alarm only goes off when there is an actual fire, not when someone burns toast.
🧠 Memory Hook
“Sensitive tests hate missing disease, specific tests hate accusing the innocent.” 🚨
Explain what sensitivity is, and its use in determining the quality of a diagnostic test.
🔍 Sensitivity
Definition
Sensitivity is the ability of a test to correctly detect people who actually have the disease.
Formula: (see attached image)
* Sensitivity = a/(a+c)
–> a = true positive
–> c = false negative
This means:
True positives ÷ all people who truly have the disease
A highly sensitive test rarely misses disease.
SnNOUT Rule - important clinical rule.
Sn = Sensitive
N = Negative
OUT = rule disease out
👉 If a highly sensitive test is negative, the disease is very unlikely.
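The formula above can be checked with a minimal Python sketch (the study counts are hypothetical):

```python
def sensitivity(tp: int, fn: int) -> float:
    """Sensitivity = a/(a+c): true positives / all who truly have the disease."""
    return tp / (tp + fn)

# Hypothetical study: of 100 diseased patients, 90 test positive (a) and 10 test negative (c)
sens = sensitivity(tp=90, fn=10)
print(sens)  # 0.9: a highly sensitive test rarely misses disease
```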
Explain what specificity is, and its use in determining the quality of a diagnostic test.
🎯 Specificity
Definition
Specificity is the ability of a test to correctly identify people who do NOT have the disease.
Formula: (see attached image)
* Specificity = d/(d+b)
–> d = true negative
–> b = false positive
This means:
True negatives ÷ all people who truly do not have the disease
A highly specific test rarely gives false positives.
SpPIN Rule
The image also highlights another key rule:
SpPIN
Sp = Specific
P = Positive
IN = rule disease in
👉 If a highly specific test is positive, the disease is very likely present.
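The specificity formula can likewise be sketched in a few lines of Python (the counts are hypothetical):

```python
def specificity(tn: int, fp: int) -> float:
    """Specificity = d/(d+b): true negatives / all who truly lack the disease."""
    return tn / (tn + fp)

# Hypothetical study: of 100 healthy people, 95 test negative (d) and 5 test positive (b)
spec = specificity(tn=95, fp=5)
print(spec)  # 0.95: a highly specific test gives few false positives
```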
What is the positive predictive value?
How do you calculate the positive predictive value?
✅ Positive Predictive Value (PPV)
🧠 Explanation (High-school level)
Positive Predictive Value (PPV) tells us the probability that a person actually has a disease if their test result is positive.
In simple terms, it answers the question:
👉 “If the test says someone has the disease, how likely is it that they really do?”
The image highlights that PPV asks: if someone tests positive, what is the chance they truly have the condition?
It also notes that PPV changes depending on how common the disease is in the population (the disease prevalence).
If the disease is common, a positive test is more likely to be correct.
If the disease is rare, many positive results may actually be false positives.
📊 The 2 × 2 Table
(See image)
—
🧮 How to Calculate PPV
PPV = a/ (a+b), where
* a = people who tested positive and truly have the disease
* b = people who tested positive but do not actually have the disease
So PPV is:
👉 True positives ÷ all positive test results
This tells us how trustworthy a positive result is.
🏥 Clinical Example
Imagine a screening test for breast cancer in which 100 people test positive: 80 truly have cancer (a = 80) and 20 do not (b = 20).
PPV = 80/(80+20) = 80/100 = 0.80
So the PPV = 80%, meaning that if the test is positive, there is an 80% chance the person actually has breast cancer.
🧸 Explain it Like You’re 10
Imagine a treasure detector on a beach.
If the detector beeps, PPV tells us how often there really is treasure there, instead of just a soda can.
🧠 Memory Hook
“Positive test? PPV asks: ‘Positive… but is it really?’” 🕵️♂️
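The PPV calculation above can be sketched in Python, reusing the breast-cancer example's counts:

```python
def ppv(tp: int, fp: int) -> float:
    """PPV = a/(a+b): true positives / all positive test results."""
    return tp / (tp + fp)

# The breast-cancer example: 80 true positives, 20 false positives
print(ppv(tp=80, fp=20))  # 0.8, i.e. an 80% chance a positive result is real
```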
What is the negative predictive value?
How do you calculate the negative predictive value?
❌ Negative Predictive Value (NPV)
🧠 Explanation (High-school level)
Negative Predictive Value (NPV) tells us the probability that a person truly does NOT have a disease if their test result is negative.
In simple terms, it answers the question:
👉 “If the test says someone does NOT have the disease, how likely is it that they really don’t?”
The image explains this idea as: when a test result is negative, what is the chance the person truly does not have the condition?
Like positive predictive value, NPV depends on disease prevalence.
If a disease is rare, most negative results will be correct, so NPV is high. If the disease is very common, a negative result may be less reassuring.
📊 The 2 × 2 Table
(See image attached)
—
🧮 How to Calculate Negative Predictive Value
NPV = d/(d+c), where
* d = people who tested negative and truly do not have the disease
* c = people who tested negative but actually have the disease
So NPV is:
👉 True negatives ÷ all negative test results
This tells us how trustworthy a negative test result is.
🏥 Clinical Example
Doctors often use the D-dimer test to evaluate possible pulmonary embolism (blood clot in the lungs). If the test result is negative, the NPV is very high, meaning the patient very likely does not have a clot, and further testing may not be needed.
🧸 Explain it Like You’re 10
Imagine a metal detector on the beach.
If it doesn’t beep, NPV tells you how sure you can be that there really isn’t treasure buried there.
🧠 Memory Hook
“NPV = Negative means ‘No Problem Verified.’” 🧘♂️
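The NPV formula can be sketched in Python (the counts below are hypothetical, chosen to mimic a D-dimer-style test):

```python
def npv(tn: int, fn: int) -> float:
    """NPV = d/(d+c): true negatives / all negative test results."""
    return tn / (tn + fn)

# Hypothetical figures: 495 true negatives, 5 false negatives
print(npv(tn=495, fn=5))  # 0.99: a negative result is very reassuring
```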
What is the likelihood ratio of a positive test?
What does it mean? How do you calculate it?
➕ Likelihood Ratio of a Positive Test (LR+)
🧠 Explanation (High-school level)
The Likelihood Ratio of a Positive Test (LR+) tells us how much more likely a positive test result is in someone who actually has the disease compared with someone who does not have the disease.
In other words, it answers the question:
👉 “If a test result is positive, how strongly does that increase the chance that the person truly has the disease?”
The image explains this idea as:
How much more likely is a positive test to occur in a person with the condition than in someone without it?
A large LR+ means the test is very good at confirming disease.
General interpretation (conventional thresholds):
* LR+ > 10 → large increase in disease probability
* LR+ 5–10 → moderate increase
* LR+ 2–5 → small increase
* LR+ ≈ 1 → no useful change
🧮 How to Calculate LR+
The formula (shown in the attached image) is:
LR+ = Sensitivity / (1 − Specificity)
Where:
* Sensitivity = the true positive rate
* 1 − Specificity = the false positive rate
So the formula compares:
👉 True positive rate ÷ false positive rate
This tells us how much a positive test result increases the likelihood of disease.
🏥 Clinical Example
Doctors sometimes use troponin tests to diagnose a heart attack. Troponin has a high LR+, meaning that if the test is positive, it is much more likely to occur in someone having a heart attack than in someone who is not, making the diagnosis much more likely.
🧸 Explain it Like You’re 10
Imagine a metal detector looking for treasure.
If the detector beeps mostly when treasure is really there and almost never when there isn’t treasure, then a beep strongly means you probably found treasure.
🧠 Memory Hook
“LR+ asks: if the test shouts ‘YES!’, how loud is the truth?” 🔊
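A minimal Python sketch of the LR+ formula (the sensitivity and specificity values are hypothetical, troponin-like figures):

```python
def lr_positive(sensitivity: float, specificity: float) -> float:
    """LR+ = sensitivity / (1 - specificity): true positive rate / false positive rate."""
    return sensitivity / (1 - specificity)

# Hypothetical test: sensitivity 0.90, specificity 0.95
print(round(lr_positive(0.90, 0.95), 1))  # 18.0, a positive result strongly favours disease
```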
What is the likelihood ratio of a negative test?
What does it mean? How do you calculate it?
➖ Likelihood Ratio of a Negative Test (LR−)
🧠 Explanation (High-school level)
The Likelihood Ratio of a Negative Test (LR−) tells us how much a negative test result lowers the chance that a person has a disease.
Another way to say it is:
👉 How much more likely is a negative test result to occur in someone without the disease compared with someone who actually has the disease?
A small LR− means a negative test strongly suggests the person does NOT have the disease.
General interpretation (conventional thresholds):
* LR− < 0.1 → large decrease in disease probability
* LR− 0.1–0.2 → moderate decrease
* LR− 0.2–0.5 → small decrease
* LR− ≈ 1 → no useful change
So the smaller the LR−, the better the test is at ruling out disease.
🧮 How to Calculate LR−
LR− = (1 − Sensitivity) / Specificity (see image)
👉 False negative rate ÷ true negative rate
📊 Using Likelihood Ratios to Update Disease Probability
Post-test odds = pre-test odds × LR (see image)
—
🏥 Clinical Example
Doctors often use a D-dimer blood test when evaluating possible pulmonary embolism (a blood clot in the lungs). The test has a very low LR−, meaning that if the result is negative, the probability of a clot becomes extremely low and doctors can safely rule it out.
🧸 Explain it Like You’re 10
Imagine a metal detector looking for treasure on a beach.
If the detector does not beep, LR− tells us how confident we can be that there really isn’t treasure there.
🧠 Memory Hook
“LR− means: if the test says ‘NO’, how much should the disease go?” 🚪
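The LR− formula can be sketched in Python (the sensitivity and specificity values are hypothetical, D-dimer-like figures):

```python
def lr_negative(sensitivity: float, specificity: float) -> float:
    """LR- = (1 - sensitivity) / specificity: false negative rate / true negative rate."""
    return (1 - sensitivity) / specificity

# Hypothetical test: sensitivity 0.95, specificity 0.50
print(round(lr_negative(0.95, 0.50), 2))  # 0.1, a negative result makes disease much less likely
```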
What is the formula to calculate sensitivity?
Sensitivity = True Positive/ (True Positive + False Negative)
What is the formula to calculate specificity?
Specificity = True negative/ (True Negative + False Positive)
What is the formula to calculate positive predictive value?
PPV = True Positive / (True Positive + False Positive)
What is the formula to calculate negative predictive value?
NPV = True Negative / (True Negative + False Negative)
What is the formula to calculate likelihood ratio of a positive test?
LR of positive test = Sensitivity/ (1 - Specificity)
What is the formula to calculate likelihood ratio of negative test?
LR of negative test = (1-Sensitivity)/ Specificity
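All six formulas above can be bundled into one helper, a minimal Python sketch working from a 2 × 2 table (the counts are hypothetical):

```python
def diagnostic_metrics(a: int, b: int, c: int, d: int) -> dict:
    """All six measures from a 2x2 table: a=TP, b=FP, c=FN, d=TN."""
    sens = a / (a + c)
    spec = d / (d + b)
    return {
        "sensitivity": sens,              # a/(a+c)
        "specificity": spec,              # d/(d+b)
        "ppv": a / (a + b),               # true positives / all positives
        "npv": d / (d + c),               # true negatives / all negatives
        "lr_positive": sens / (1 - spec), # sensitivity / (1 - specificity)
        "lr_negative": (1 - sens) / spec, # (1 - sensitivity) / specificity
    }

# Hypothetical table: 80 TP, 20 FP, 10 FN, 90 TN
m = diagnostic_metrics(a=80, b=20, c=10, d=90)
print(m["ppv"], m["npv"])  # 0.8 0.9
```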
What are:
* Pre-test odds
How would you calculate it?
🎯 Pre-Test Odds
🧠 Explanation (High-school level)
Pre-test odds describe the chance that a person has a disease before any diagnostic test result is known. Doctors estimate this using things like symptoms, medical history, physical examination, and how common the disease is in the population.
The image shows the formula used to calculate pre-test odds from pre-test probability:
Pre-test odds = pre-test probability / (1 − pre-test probability) (see image)
This simply converts probability (a percentage chance) into odds, which are easier to use in calculations when applying likelihood ratios to update disease probability after a test.
So the steps are:
1️⃣ Estimate the pre-test probability of disease
2️⃣ Convert it into pre-test odds using the formula above
3️⃣ Multiply by a likelihood ratio to calculate the post-test odds
In short:
👉 Pre-test odds = the starting odds that someone has the disease before testing.
🏥 Clinical Example (1–3 sentences)
A patient arrives at the emergency department with sudden chest pain and shortness of breath. Based on their symptoms, risk factors, and clinical scoring systems, a doctor estimates the pre-test probability of pulmonary embolism to be 20%. This probability can then be converted into pre-test odds before applying diagnostic tests such as a D-dimer.
🧸 Explain it Like You’re 10 (1–3 sentences)
Imagine guessing whether a treasure chest is buried on a beach before using a metal detector. You look at clues like old pirate maps and footprints. Pre-test odds are your best guess before you start digging.
🧠 Memory Hook
“Pre-test odds = your best medical guess before the test does the talking.” 🎤
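The probability-to-odds conversion described above can be sketched in Python, using the 20% pre-test probability from the PE example:

```python
def pretest_odds(pretest_probability: float) -> float:
    """Convert probability to odds: odds = p / (1 - p)."""
    return pretest_probability / (1 - pretest_probability)

# The PE example: pre-test probability of 20%
print(pretest_odds(0.20))  # 0.25, i.e. odds of 1 to 4
```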
What is the formula for calculating pre-test odds?
Pre-test odds = pre-test probability / (1 − pre-test probability). See image.
What are:
* Post-test odds?
How would you calculate it?
🎯 Post-Test Odds
🧠 Explanation (High-school level)
Post-test odds describe the chance that a person has a disease after the result of a diagnostic test is known.
Doctors start with an initial estimate of disease likelihood (called the pre-test odds) based on symptoms, history, and how common the disease is. Then they update that estimate using the test result.
The image shows the key formula:
Post-test odds = pre-test odds × likelihood ratio (see attached image)
In simple terms:
1️⃣ Start with the pre-test odds (how likely the disease seemed before testing).
2️⃣ Multiply by the likelihood ratio of the test result.
3️⃣ This gives the post-test odds, which represent the new probability of disease after the test result is considered.
So post-test odds help doctors combine clinical judgement with test results to estimate how likely the disease really is.
🏥 Clinical Example (1–3 sentences)
A patient arrives with symptoms suggesting pulmonary embolism (PE). Based on clinical assessment, the doctor estimates a moderate pre-test probability of PE. After a negative D-dimer test with a very low LR−, the post-test odds drop significantly, making PE very unlikely.
🧸 Explain it Like You’re 10 (1–3 sentences)
Imagine you think there might be treasure buried on a beach. Then you use a metal detector. The detector beep changes how confident you are about the treasure—that new level of confidence is the post-test odds.
🧠 Memory Hook
“Post-test odds = your medical guess after the test speaks.” 🎤
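The post-test odds calculation can be sketched in Python (the pre-test odds and LR− values below are hypothetical):

```python
def posttest_odds(pretest_odds: float, likelihood_ratio: float) -> float:
    """Post-test odds = pre-test odds x likelihood ratio."""
    return pretest_odds * likelihood_ratio

# Hypothetical PE work-up: pre-test odds 0.25, negative D-dimer with an assumed LR- of 0.1
print(round(posttest_odds(0.25, 0.1), 3))  # 0.025, the disease becomes very unlikely
```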
What is the formula for post-test odds?
Post-test odds = pre-test odds × likelihood ratio. See image.
What is post-test probability? How would you calculate it?
🎯 Post-Test Probability
🧠 Explanation (High-school level)
Post-test probability is the chance that a person actually has a disease after the result of a diagnostic test is known.
Doctors begin with an initial estimate of disease likelihood based on symptoms and risk factors (called the pre-test probability). After performing a diagnostic test, they update this estimate using likelihood ratios and post-test odds. The final result—how likely the disease is after the test result is considered—is called the post-test probability.
The image shows how to convert post-test odds into post-test probability using this formula:
Post-test probability = post-test odds / (1 + post-test odds) (see attached image)
1️⃣ Estimate pre-test probability of disease
2️⃣ Convert it to pre-test odds
3️⃣ Multiply by a likelihood ratio → gives post-test odds
4️⃣ Convert those odds to post-test probability
In simple terms:
👉 Post-test probability = the final chance a patient has the disease after considering the test result.
🏥 Clinical Example (1–3 sentences)
A patient comes to the emergency department with symptoms suggesting pulmonary embolism. Based on clinical assessment, the doctor estimates a 20% pre-test probability. After a negative D-dimer test, the post-test probability becomes extremely low, allowing the doctor to safely rule out a blood clot.
🧸 Explain it Like You’re 10 (1–3 sentences)
Imagine you think there might be treasure buried on a beach. Then you use a metal detector to check. After hearing whether it beeps or stays silent, you update your guess about whether treasure is really there—that final guess is the post-test probability.
🧠 Memory Hook
“Post-test probability = the doctor’s final guess after the test gives its opinion.” 🩺🎤
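The four steps above can be chained into one Python sketch, using the 20% pre-test probability from the PE example and an assumed LR− of 0.1:

```python
def posttest_probability(pretest_prob: float, likelihood_ratio: float) -> float:
    """Chain: probability -> odds -> multiply by LR -> odds -> probability."""
    pre_odds = pretest_prob / (1 - pretest_prob)  # step 2: convert to odds
    post_odds = pre_odds * likelihood_ratio       # step 3: apply the likelihood ratio
    return post_odds / (1 + post_odds)            # step 4: convert back to probability

# PE example: 20% pre-test probability, negative D-dimer with an assumed LR- of 0.1
print(round(posttest_probability(0.20, 0.1), 3))  # 0.024, i.e. roughly a 2% chance
```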
What is the formula for post-test probability?
Post-test probability = post-test odds / (1 + post-test odds). See attached image.
What is the likelihood ratio nomogram? Explain how you would interpret it.
📊 Likelihood Ratio Nomogram
A likelihood ratio nomogram is a visual tool doctors use to estimate the probability that a patient has a disease after a diagnostic test result—without needing to do complex calculations.
It combines three things:
1️⃣ Pre-test probability – how likely the disease seemed before the test
2️⃣ Likelihood ratio (LR) – how strongly the test result changes the probability
3️⃣ Post-test probability – the updated probability after the test result
Doctors normally calculate this mathematically: convert the pre-test probability to odds, multiply by the likelihood ratio, then convert the result back to a probability.
The nomogram lets you do this graphically using a straight line instead of formulas.
🧭 How to Read the Nomogram
The diagram has three vertical columns:
Left column: Pre-test probability (%)
Middle column: Likelihood ratio
Right column: Post-test probability (%)
Step-by-step
1️⃣ Find the pre-test probability on the left column
This is your estimate of disease likelihood based on symptoms, history, and prevalence.
2️⃣ Find the likelihood ratio (LR) in the middle column
Use LR+ for a positive test or LR− for a negative test.
3️⃣ Draw a straight line connecting the two points
Extend that line across to the right column.
4️⃣ Read where the line crosses the right column
That number is the post-test probability — the updated chance that the patient has the disease.
🏥 Clinical Example
A patient arrives with symptoms suggesting pulmonary embolism, with an estimated pre-test probability of 30%. A D-dimer test comes back negative (a low LR−).
On the nomogram, a straight line from 30% in the left column through the LR− value in the middle column crosses the right column at about 3%.
👉 After the negative test, the probability of pulmonary embolism drops from 30% → ~3%, making the diagnosis very unlikely.
🧸 Explain it Like You’re 10
Imagine you’re guessing whether treasure is buried on a beach. Your first guess comes from clues like maps and footprints (left column), the metal detector’s reliability is the test power (middle column), and after hearing whether it beeps you make a new guess (right column).
🧠 Memory Hook
“Left = your guess, middle = the test power, right = your new guess.” 🎯
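The nomogram is just a graphical shortcut for the odds calculation, so the worked example can be checked numerically. A minimal Python sketch (the LR− of 0.07 is an assumed figure chosen to match the ~3% result):

```python
def nomogram_check(pretest_prob: float, lr: float) -> float:
    """Numeric equivalent of drawing the straight line on the nomogram."""
    odds = pretest_prob / (1 - pretest_prob)  # left column -> odds
    post_odds = odds * lr                     # apply the middle-column LR
    return post_odds / (1 + post_odds)        # odds -> right column

# The example above: 30% pre-test probability; an assumed LR- of 0.07
print(round(nomogram_check(0.30, 0.07), 2))  # 0.03, matching the "30% -> ~3%" drop
```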
What is the receiver operating curve?
📈 Receiver Operating Characteristic (ROC) Curve
A Receiver Operating Characteristic (ROC) curve is a graph used to evaluate how good a diagnostic test is at distinguishing between people who have a disease and those who do not.
It helps researchers choose the best cut-off point for a test (for example, deciding what blood level counts as “positive”).
The name originally came from radar and sonar detection during World War II, where engineers used similar graphs to detect signals from noise.
🧠 Simple Explanation (High-school level)
An ROC curve shows the trade-off between sensitivity and specificity for a diagnostic test.
The graph has two axes:
* y-axis: sensitivity (the true positive rate)
* x-axis: 1 − specificity (the false positive rate)
Each point on the curve represents a different cut-off value of the diagnostic test.
For example, lowering the cut-off makes the test more sensitive but less specific, while raising it does the opposite.
⭐ How to Interpret the ROC Curve
1️⃣ Perfect test
A perfect test would go straight up the left side from (0,0) to (0,1), then across the top to (1,1).
This means the test detects all disease and has no false positives.
2️⃣ Line of no discrimination
A diagonal line from:
(0,0) → (1,1)
represents a completely useless test.
This means the test performs no better than random guessing.
3️⃣ Better tests move toward the upper-left corner
The closer the curve moves toward the upper-left corner, the better the test.
This means higher sensitivity with fewer false positives.
📊 Area Under the Curve (AUC)
The most important statistic from an ROC curve is the Area Under the Curve (AUC).
A larger AUC means a better diagnostic test: an AUC of 0.5 corresponds to the diagonal line of no discrimination, while an AUC of 1.0 corresponds to a perfect test.
(See the attached image for example AUC values.)
✅ In one sentence:
An ROC curve is a graph that shows how well a diagnostic test separates people with a disease from those without it by comparing sensitivity and false positive rate across different test thresholds.
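The idea of plotting one (false positive rate, sensitivity) point per cut-off, and summarising the curve with the AUC, can be sketched in plain Python (the test scores are hypothetical):

```python
def roc_points(scores_diseased, scores_healthy, thresholds):
    """Each threshold gives one (false positive rate, sensitivity) point on the ROC curve."""
    points = []
    for t in thresholds:
        sens = sum(s >= t for s in scores_diseased) / len(scores_diseased)
        fpr = sum(s >= t for s in scores_healthy) / len(scores_healthy)
        points.append((fpr, sens))
    return points

def auc_trapezoid(points):
    """Area under the curve by the trapezoidal rule (points sorted by FPR)."""
    pts = sorted(points)
    area = 0.0
    for (x1, y1), (x2, y2) in zip(pts, pts[1:]):
        area += (x2 - x1) * (y1 + y2) / 2
    return area

# Hypothetical scores: diseased patients all score higher than healthy ones
diseased = [0.9, 0.8, 0.7, 0.6]
healthy = [0.4, 0.3, 0.2, 0.1]
pts = [(0.0, 0.0)] + roc_points(diseased, healthy, [0.9, 0.7, 0.5, 0.3, 0.1]) + [(1.0, 1.0)]
print(auc_trapezoid(pts))  # 1.0: perfectly separated scores give the maximal AUC
```

An overlapping score distribution would pull the curve toward the diagonal and the AUC toward 0.5.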