CAP 7: Diagnostic Studies Flashcards

(25 cards)

1
Q

How would you evaluate the quality of a new diagnostic test?

A

By comparing it to the independently established gold standard diagnostic test.

E.g., the CAGE questionnaire must be compared with CDT (carbohydrate-deficient transferrin) for diagnosing alcohol dependence. The gold standard must be applied to all patients in the study regardless of diagnosis.

2
Q

What is selection bias/ spectrum bias in diagnostic tests?

A

🧪 Selection Bias / Spectrum Bias in Diagnostic Tests

🧠 Explanation (High-school level)

Selection bias (also called spectrum bias) in diagnostic testing occurs when the people included in the study used to evaluate the test are not representative of the real patients who will receive the test in practice.

The key idea from the image is that the study population should resemble the population where the test will actually be used. If the groups are very different—for example, if the study includes mostly very sick patients while real-world patients have milder disease—the test may appear more accurate than it truly is.

This happens because disease prevalence and severity influence how well a test performs. If the study sample has a very high or very low prevalence compared with real practice, the test’s ability to correctly identify disease (predictive value) may change.

In short:

👉 Spectrum bias = the test is studied in the wrong type of patients.

🏥 Clinical Example (1–3 sentences)

Imagine researchers develop a diagnostic test for borderline personality disorder and test it only in patients admitted to a specialist personality disorder unit, where the condition is very common. The test may look extremely accurate there, but when used in a general outpatient clinic, where the condition is less common and symptoms are milder, the accuracy may be much lower.

🧸 Explain it Like You’re 10 (1–3 sentences)

Imagine testing a metal detector only on a beach where you already know lots of treasure is buried. It will look like the detector finds treasure all the time. But if you use it in a regular park, it may not work nearly as well.

🧠 Memory Hook

“Test it on the right crowd — or the results will be loud but wrong.” 🎭

3
Q

What is observer bias in the use of diagnostic tests?

A

👀 Observer Bias in a Diagnostic Test

🧠 Explanation (High-school level)

Observer bias occurs when the person performing or interpreting a diagnostic test is influenced by what they already know about the patient. If the investigator or outcome assessor knows the patient’s diagnosis or clinical history, their expectations can unconsciously influence how they interpret the test result or record the findings.

The key point from the image is that investigators and outcome assessors should ideally be blinded (kept unaware) of the patient’s diagnosis when administering or interpreting the diagnostic test. This helps prevent their expectations from influencing the result.

If blinding is not used, the observer may look harder for abnormal findings in patients they believe have the disease, which can make the test appear more accurate than it actually is.

🧪 Related Concepts Mentioned in the Image

⚠️ Work-up Bias (Verification Bias)

This occurs when the gold standard test (the definitive test for confirming a disease) is only performed in some patients, usually those with a positive screening test.

Because the gold standard is less often used in patients with negative results, some cases of disease may be missed. This can make the test appear better at detecting disease than it really is.

Example:
If a new blood test for colon cancer is positive, doctors perform a colonoscopy (the gold standard), but if the blood test is negative, they do not. Some cancers in the negative group might therefore be missed.

🧬 Will Rogers Phenomenon

This occurs when better diagnostic tests reclassify patients into more accurate disease stages. When comparing outcomes between hospitals or studies that use different diagnostic tools, the improved staging may make survival statistics look better or worse, even though the actual patient outcomes have not changed.

🏥 Clinical Example (Observer Bias)

A radiologist is reviewing brain MRI scans for multiple sclerosis lesions. If they already know that a patient has been diagnosed with multiple sclerosis, they may scrutinise the scan more closely and be more likely to label small findings as lesions, which can inflate the apparent accuracy of the test.

🧸 Explain it Like You’re 10

Imagine a teacher marking a test knows which student usually gets top grades. They might accidentally look more kindly at that student’s answers, even if another student wrote the same thing.

🧠 Memory Hook

“If the observer knows the answer, the test may get extra ‘points’.” 🎯

4
Q

Explain the role of sensitivity and specificity in a diagnostic test.

A

🧪 Sensitivity and Specificity in Diagnostic Tests
🧠 Explanation (High-school level)

When doctors use a diagnostic test, they want to know how good the test is at correctly identifying who has a disease and who does not. Two important measures help with this:

  • Sensitivity
  • Specificity

These measures are calculated by comparing the test results to the gold standard, which is the best available method for determining whether someone truly has the disease.

Researchers often organise the results in a 2 × 2 table:
(See attached graphics)

False positives and false negatives can have serious consequences, so doctors must balance sensitivity and specificity depending on the condition and the risks of further tests.

⚖️ Why Both Matter

There is often a trade-off between sensitivity and specificity.

For dangerous diseases, doctors may prefer high sensitivity to avoid missing cases.

For conditions where treatment is risky or invasive, doctors may prefer high specificity to avoid unnecessary treatment.

🏥 Clinical Example

Doctors use a D-dimer blood test when evaluating possible pulmonary embolism (blood clot in the lungs).
The test is very sensitive, so if the result is negative, doctors can confidently rule out a clot without further invasive testing.

🧸 Explain it Like You’re 10

Imagine a smoke alarm in your house.
A very sensitive alarm goes off whenever there is smoke, so it rarely misses a real fire.
A very specific alarm only goes off when there is an actual fire, not when someone burns toast.

🧠 Memory Hook
“Sensitive tests hate missing disease, specific tests hate accusing the innocent.” 🚨
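As a quick sanity check, both measures can be computed from a hypothetical 2 × 2 table. The counts below are illustrative assumptions, not from a real study:

```python
# Hypothetical 2 x 2 table counts (illustrative only):
# a = true positives, b = false positives,
# c = false negatives, d = true negatives
a, b, c, d = 90, 30, 10, 870

sensitivity = a / (a + c)  # true positives / all who truly have the disease
specificity = d / (d + b)  # true negatives / all who truly do not

print(round(sensitivity, 2))  # 0.9
print(round(specificity, 2))  # 0.97
```

With these counts the test catches 90% of true cases and correctly clears about 97% of healthy people.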

5
Q

Explain what sensitivity is, and its use in determining the quality of a diagnostic test.

A

🔍 Sensitivity
Definition

Sensitivity is the ability of a test to correctly detect people who actually have the disease.

Formula: (see attached image)
* Sensitivity = a/(a+c)
→ a = true positive
→ c = false negative

This means:
True positives ÷ all people who truly have the disease

A highly sensitive test rarely misses disease.

SnNOUT Rule - important clinical rule.
Sn = Sensitive
N = Negative
OUT = rule disease out

👉 If a highly sensitive test is negative, the disease is very unlikely.

6
Q

Explain what specificity is, and its use in determining the quality of a diagnostic test.

A

🎯 Specificity
Definition

Specificity is the ability of a test to correctly identify people who do NOT have the disease.

Formula: (see attached image)
* Specificity = d/(d+b)
→ d = true negative
→ b = false positive

This means:
True negatives ÷ all people who truly do not have the disease
A highly specific test rarely gives false positives.

SpPIN Rule - important clinical rule.
Sp = Specific
P = Positive
IN = rule disease in

👉 If a highly specific test is positive, the disease is very likely present.

7
Q

What is the positive predictive value?
How do you calculate the positive predictive value?

A

✅ Positive Predictive Value (PPV)

🧠 Explanation (High-school level)

Positive Predictive Value (PPV) tells us the probability that a person actually has a disease if their test result is positive.

In simple terms, it answers the question:

👉 “If the test says someone has the disease, how likely is it that they really do?”

The image highlights that PPV asks: if someone tests positive, what is the chance they truly have the condition?
It also notes that PPV changes depending on how common the disease is in the population (the disease prevalence).

If the disease is common, a positive test is more likely to be correct.
If the disease is rare, many positive results may actually be false positives.

📊 The 2 × 2 Table
(See image)

🧮 How to Calculate PPV
PPV = a/ (a+b), where
* a = people who tested positive and truly have the disease
* b = people who tested positive but do not actually have the disease

So PPV is:

👉 True positives ÷ all positive test results

This tells us how trustworthy a positive result is.

🏥 Clinical Example

Imagine a screening test for breast cancer.

  • 80 women test positive and truly have cancer (true positives)
  • 20 women test positive but do not have cancer (false positives)

PPV = 80/(80+20) = 80/100 = 0.80

So the PPV = 80%, meaning that if the test is positive, there is an 80% chance the person actually has breast cancer.
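The arithmetic above can be sketched in a couple of lines, using the same illustrative counts:

```python
# Counts from the breast-cancer example (illustrative numbers)
true_positives = 80    # positive test, truly has cancer
false_positives = 20   # positive test, no cancer

ppv = true_positives / (true_positives + false_positives)
print(ppv)  # 0.8 -> an 80% chance a positive result is correct
```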

🧸 Explain it Like You’re 10

Imagine a treasure detector on a beach.

If the detector beeps, PPV tells us how often there really is treasure there, instead of just a soda can.

🧠 Memory Hook

“Positive test? PPV asks: ‘Positive… but is it really?’” 🕵️‍♂️

8
Q

What is the negative predictive value?
How do you calculate the negative predictive value?

A

❌ Negative Predictive Value (NPV)

🧠 Explanation (High-school level)

Negative Predictive Value (NPV) tells us the probability that a person truly does NOT have a disease if their test result is negative.

In simple terms, it answers the question:

👉 “If the test says someone does NOT have the disease, how likely is it that they really don’t?”

The image explains this idea as: when a test result is negative, what is the chance the person truly does not have the condition?

Like positive predictive value, NPV depends on disease prevalence.
If a disease is rare, most negative results will be correct, so NPV is high. If the disease is very common, a negative result may be less reassuring.
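The prevalence effect can be demonstrated with a short sketch: the same hypothetical test (90% sensitive, 90% specific) gives very different predictive values in a rare-disease versus a common-disease population. The function and all numbers below are illustrative assumptions, not from a real study.

```python
# Sketch: predictive values of one fixed test at two different prevalences.
def predictive_values(prevalence, sensitivity=0.9, specificity=0.9, n=10_000):
    diseased = prevalence * n
    healthy = n - diseased
    tp = sensitivity * diseased      # true positives
    fn = diseased - tp               # false negatives
    tn = specificity * healthy       # true negatives
    fp = healthy - tn                # false positives
    ppv = tp / (tp + fp)
    npv = tn / (tn + fn)
    return ppv, npv

ppv_rare, npv_rare = predictive_values(prevalence=0.01)      # rare disease
ppv_common, npv_common = predictive_values(prevalence=0.30)  # common disease
# Rare: NPV is very high (~0.999) but PPV is poor (~0.08).
# Common: PPV improves (~0.79) while NPV falls (~0.95).
```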

📊 The 2 × 2 Table
(See image attached)

🧮 How to Calculate Negative Predictive Value

NPV = d/(d+c)

Where:

  • d = people who tested negative and truly do not have the disease
  • c = people who tested negative but actually do have the disease

So NPV is:

👉 True negatives ÷ all negative test results

This tells us how trustworthy a negative test result is.

🏥 Clinical Example

Doctors often use the D-dimer test to evaluate possible pulmonary embolism (blood clot in the lungs). If the test result is negative, the NPV is very high, meaning the patient very likely does not have a clot, and further testing may not be needed.

🧸 Explain it Like You’re 10

Imagine a metal detector on the beach.

If it doesn’t beep, NPV tells you how sure you can be that there really isn’t treasure buried there.

🧠 Memory Hook

“NPV = Negative means ‘No Problem Verified.’” 🧘‍♂️

9
Q

What is the likelihood ratio of a positive test?
What does it mean? How do you calculate it?

A

➕ Likelihood Ratio of a Positive Test (LR+)

🧠 Explanation (High-school level)

The Likelihood Ratio of a Positive Test (LR+) tells us how much more likely a positive test result is in someone who actually has the disease compared with someone who does not have the disease.

In other words, it answers the question:

👉 “If a test result is positive, how strongly does that increase the chance that the person truly has the disease?”

The image explains this idea as:
How much more likely is a positive test to occur in a person with the condition than in someone without it?

A large LR+ means the test is very good at confirming disease.

General interpretation:

  • LR+ > 10 → strong evidence for disease
  • LR+ 5–10 → moderate evidence
  • LR+ 2–5 → small increase in probability
  • LR+ ≈ 1 → test doesn’t change the probability much

🧮 How to Calculate LR+

The formula (see attached image) is:

LR+ = Sensitivity / (1 − Specificity)

Where:

  • Sensitivity = ability of the test to correctly detect people with the disease
  • Specificity = ability of the test to correctly identify people without the disease
  • 1 − Specificity = false positive rate

So the formula compares:

👉 True positive rate ÷ false positive rate

This tells us how much a positive test result increases the likelihood of disease.
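The formula can be checked with illustrative values (assumed for the sketch, not any real test's published figures):

```python
# Sketch: LR+ for a hypothetical test
sensitivity = 0.95   # true positive rate (illustrative)
specificity = 0.90   # so the false positive rate is 1 - 0.90 = 0.10

lr_plus = sensitivity / (1 - specificity)
print(round(lr_plus, 1))  # 9.5 -> moderate-to-strong evidence for disease
```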

🏥 Clinical Example

Doctors sometimes use troponin tests to diagnose a heart attack. Troponin has a high LR+, meaning that if the test is positive, it is much more likely to occur in someone having a heart attack than in someone who is not, making the diagnosis much more likely.

🧸 Explain it Like You’re 10

Imagine a metal detector looking for treasure.

If the detector beeps mostly when treasure is really there and almost never when there isn’t treasure, then a beep strongly means you probably found treasure.

🧠 Memory Hook

“LR+ asks: if the test shouts ‘YES!’, how loud is the truth?” 🔊

10
Q

What is the likelihood ratio of a negative test?
What does it mean? How do you calculate it?

A

➖ Likelihood Ratio of a Negative Test (LR−)

🧠 Explanation (High-school level)

The Likelihood Ratio of a Negative Test (LR−) tells us how much a negative test result lowers the chance that a person has a disease.

Another way to say it is:

👉 How much more likely is a negative test result to occur in someone without the disease compared with someone who actually has the disease?

A small LR− means a negative test strongly suggests the person does NOT have the disease.

General interpretation:

  • LR− < 0.1 → strong evidence the disease is absent
  • LR− 0.1–0.2 → moderate evidence
  • LR− 0.2–0.5 → small decrease in probability
  • LR− ≈ 1 → test does not change the probability much

So the smaller the LR−, the better the test is at ruling out disease.

🧮 How to Calculate LR−

LR− = (1 − Sensitivity) / Specificity, i.e. the false negative rate ÷ the true negative rate. (See image.)
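Using the formula from card 16 (LR− = (1 − Sensitivity)/Specificity), a quick sketch with illustrative values:

```python
# Sketch: LR- for a hypothetical, highly sensitive test
sensitivity = 0.95   # misses only 5% of true cases
specificity = 0.60

lr_minus = (1 - sensitivity) / specificity  # false negative rate / true negative rate
print(round(lr_minus, 3))  # 0.083 -> strong evidence the disease is absent
```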

📊 Using Likelihood Ratios to Update Disease Probability
(See image)

🏥 Clinical Example

Doctors often use a D-dimer blood test when evaluating possible pulmonary embolism (a blood clot in the lungs). The test has a very low LR−, meaning that if the result is negative, the probability of a clot becomes extremely low and doctors can safely rule it out.

🧸 Explain it Like You’re 10

Imagine a metal detector looking for treasure on a beach.

If the detector does not beep, LR− tells us how confident we can be that there really isn’t treasure there.

🧠 Memory Hook

“LR− means: if the test says ‘NO’, how much should the disease go?” 🚪

11
Q

What is the formula to calculate sensitivity?

A

Sensitivity = True Positive/ (True Positive + False Negative)

12
Q

What is the formula to calculate specificity?

A

Specificity = True negative/ (True Negative + False Positive)

13
Q

What is the formula to calculate positive predictive value?

A

PPV = True Positive / (True Positive + False Positive)

14
Q

What is the formula to calculate negative predictive value?

A

NPV = True Negative / (True Negative + False Negative)

15
Q

What is the formula to calculate likelihood ratio of a positive test?

A

LR of positive test = Sensitivity/ (1 - Specificity)

16
Q

What is the formula to calculate likelihood ratio of negative test?

A

LR of negative test = (1-Sensitivity)/ Specificity

17
Q

What are:
* Pre-test odds

How would you calculate it?

A

🎯 Pre-Test Odds

🧠 Explanation (High-school level)

Pre-test odds describe the chance that a person has a disease before any diagnostic test result is known. Doctors estimate this using things like symptoms, medical history, physical examination, and how common the disease is in the population.

The image shows the formula used to calculate pre-test odds from pre-test probability:

Pre-test odds = Pre-test probability / (1 − Pre-test probability) (see attached image)

This simply converts probability (a percentage chance) into odds, which are easier to use in calculations when applying likelihood ratios to update disease probability after a test.

So the steps are:

1️⃣ Estimate the pre-test probability of disease
2️⃣ Convert it into pre-test odds using the formula above
3️⃣ Multiply by a likelihood ratio to calculate the post-test odds

In short:

👉 Pre-test odds = the starting odds that someone has the disease before testing.

🏥 Clinical Example (1–3 sentences)

A patient arrives at the emergency department with sudden chest pain and shortness of breath. Based on their symptoms, risk factors, and clinical scoring systems, a doctor estimates the pre-test probability of pulmonary embolism to be 20%. This probability can then be converted into pre-test odds before applying diagnostic tests such as a D-dimer.
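The conversion in step 2️⃣ is simple arithmetic. A minimal sketch, assuming the standard probability-to-odds conversion and using the 20% estimate from the example above:

```python
# Sketch: converting pre-test probability to pre-test odds
# (assumes the standard conversion odds = p / (1 - p))
def probability_to_odds(p):
    return p / (1 - p)

pretest_odds = probability_to_odds(0.20)  # 20% pre-test probability
print(pretest_odds)  # 0.25, i.e. odds of 1 to 4
```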

🧸 Explain it Like You’re 10 (1–3 sentences)

Imagine guessing whether a treasure chest is buried on a beach before using a metal detector. You look at clues like old pirate maps and footprints. Pre-test odds are your best guess before you start digging.

🧠 Memory Hook

“Pre-test odds = your best medical guess before the test does the talking.” 🎤

18
Q

What is the formula for pre-test odds?

A

Pre-test odds = Pre-test probability / (1 − Pre-test probability)

19
Q

What are:
* Post-test odds?

How would you calculate it?

A

🎯 Post-Test Odds

🧠 Explanation (High-school level)

Post-test odds describe the chance that a person has a disease after the result of a diagnostic test is known.

Doctors start with an initial estimate of disease likelihood (called the pre-test odds) based on symptoms, history, and how common the disease is. Then they update that estimate using the test result.

The image shows the key formula:

Post-test odds = Pre-test odds × Likelihood ratio (see attached image)

In simple terms:

1️⃣ Start with the pre-test odds (how likely the disease seemed before testing).
2️⃣ Multiply by the likelihood ratio of the test result.
3️⃣ This gives the post-test odds, which represent the new probability of disease after the test result is considered.

So post-test odds help doctors combine clinical judgement with test results to estimate how likely the disease really is.
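The update in step 2️⃣ is a single multiplication. The LR− value below is a hypothetical figure chosen for the sketch, not any test's published LR−:

```python
# Sketch: post-test odds = pre-test odds x likelihood ratio
pretest_odds = 0.25   # e.g. from a 20% pre-test probability
lr_negative = 0.1     # hypothetical LR- for a negative result

posttest_odds = pretest_odds * lr_negative
print(round(posttest_odds, 3))  # 0.025 -> disease now much less likely
```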

🏥 Clinical Example (1–3 sentences)

A patient arrives with symptoms suggesting pulmonary embolism (PE). Based on clinical assessment, the doctor estimates a moderate pre-test probability of PE. After a negative D-dimer test with a very low LR−, the post-test odds drop significantly, making PE very unlikely.

🧸 Explain it Like You’re 10 (1–3 sentences)

Imagine you think there might be treasure buried on a beach. Then you use a metal detector. The detector beep changes how confident you are about the treasure—that new level of confidence is the post-test odds.

🧠 Memory Hook

“Post-test odds = your medical guess after the test speaks.” 🎤

20
Q

What is the formula for post-test odds?

A

Post-test odds = Pre-test odds × Likelihood ratio

21
Q

What is post-test probability? How would you calculate it?

A

🎯 Post-Test Probability

🧠 Explanation (High-school level)

Post-test probability is the chance that a person actually has a disease after the result of a diagnostic test is known.

Doctors begin with an initial estimate of disease likelihood based on symptoms and risk factors (called the pre-test probability). After performing a diagnostic test, they update this estimate using likelihood ratios and post-test odds. The final result—how likely the disease is after the test result is considered—is called the post-test probability.

The image shows how to convert post-test odds into post-test probability using this formula:

Post-test probability = Post-test odds / (1 + Post-test odds) (see attached image)

1️⃣ Estimate pre-test probability of disease
2️⃣ Convert it to pre-test odds
3️⃣ Multiply by a likelihood ratio → gives post-test odds
4️⃣ Convert those odds to post-test probability

In simple terms:

👉 Post-test probability = the final chance a patient has the disease after considering the test result.

🏥 Clinical Example (1–3 sentences)

A patient comes to the emergency department with symptoms suggesting pulmonary embolism. Based on clinical assessment, the doctor estimates a 20% pre-test probability. After a negative D-dimer test, the post-test probability becomes extremely low, allowing the doctor to safely rule out a blood clot.
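The four steps can be chained in one small function. The LR− of 0.1 below is a hypothetical value for the sketch (the card does not state the D-dimer's exact LR−):

```python
# Sketch: pre-test probability -> post-test probability via odds and an LR
def posttest_probability(pretest_prob, likelihood_ratio):
    pretest_odds = pretest_prob / (1 - pretest_prob)   # step 2
    posttest_odds = pretest_odds * likelihood_ratio    # step 3
    return posttest_odds / (1 + posttest_odds)         # step 4

p = posttest_probability(0.20, 0.1)  # step 1: 20% pre-test estimate
print(round(p, 3))  # 0.024 -> about a 2% chance after the negative test
```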

🧸 Explain it Like You’re 10 (1–3 sentences)

Imagine you think there might be treasure buried on a beach. Then you use a metal detector to check. After hearing whether it beeps or stays silent, you update your guess about whether treasure is really there—that final guess is the post-test probability.

🧠 Memory Hook

“Post-test probability = the doctor’s final guess after the test gives its opinion.” 🩺🎤

22
Q

What is the formula for post-test probability?

A

Post-test probability = Post-test odds / (1 + Post-test odds) (see attached image).

23
Q

What is the likelihood ratio nomogram? Explain how you would interpret it.

A

📊 Likelihood Ratio Nomogram

A likelihood ratio nomogram is a visual tool doctors use to estimate the probability that a patient has a disease after a diagnostic test result—without needing to do complex calculations.

It combines three things:

1️⃣ Pre-test probability – how likely the disease seemed before the test
2️⃣ Likelihood ratio (LR) – how strongly the test result changes the probability
3️⃣ Post-test probability – the updated probability after the test result

Doctors normally calculate this mathematically:

  • Post-test odds = Pre-test odds × Likelihood ratio

The nomogram lets you do this graphically using a straight line instead of formulas.

🧭 How to Read the Nomogram

The diagram has three vertical columns:

Left column: Pre-test probability (%)
Middle column: Likelihood ratio
Right column: Post-test probability (%)

Step-by-step

1️⃣ Find the pre-test probability on the left column
This is your estimate of disease likelihood based on symptoms, history, and prevalence.

2️⃣ Find the likelihood ratio (LR) in the middle column
Use LR+ for a positive test or LR− for a negative test.

3️⃣ Draw a straight line connecting the two points
Extend that line across to the right column.

4️⃣ Read where the line crosses the right column
That number is the post-test probability — the updated chance that the patient has the disease.

🏥 Clinical Example

A patient arrives with symptoms suggesting pulmonary embolism.

  • Pre-test probability = 30%
  • D-dimer test has LR− = 0.1

On the nomogram:

  • Start at 30% on the left
  • Draw a line through 0.1 on the LR column
  • The line crosses the right side at about 4%

👉 After the negative test, the probability of pulmonary embolism drops from 30% → ~4%, making the diagnosis very unlikely.
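A nomogram is a graphical shortcut for the odds arithmetic. The sketch below reproduces the pulmonary embolism example numerically; exact calculation gives roughly 4%, close to the value read off the chart:

```python
# Sketch: the arithmetic behind the nomogram example above
pretest_prob = 0.30
lr_negative = 0.1

pretest_odds = pretest_prob / (1 - pretest_prob)    # ~0.43
posttest_odds = pretest_odds * lr_negative          # ~0.043
posttest_prob = posttest_odds / (1 + posttest_odds)
print(round(posttest_prob, 2))  # 0.04 -> about a 4% chance of PE
```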

🧸 Explain it Like You’re 10

Imagine you’re guessing whether treasure is buried on a beach.

  • First you make a guess based on clues (pre-test probability).
  • Then you use a metal detector (the test).
  • The nomogram helps you figure out how much your guess should change after hearing the beep.

🧠 Memory Hook

“Left = your guess, middle = the test power, right = your new guess.” 🎯

24
Q

What is the receiver operating characteristic (ROC) curve?

A

📈 Receiver Operating Characteristic (ROC) Curve

A Receiver Operating Characteristic (ROC) curve is a graph used to evaluate how good a diagnostic test is at distinguishing between people who have a disease and those who do not.

It helps researchers choose the best cut-off point for a test (for example, deciding what blood level counts as “positive”).

The name originally came from radar and sonar detection during World War II, where engineers used similar graphs to detect signals from noise.

🧠 Simple Explanation (High-school level)

An ROC curve shows the trade-off between sensitivity and specificity for a diagnostic test.

The graph has two axes:

  • Y-axis: Sensitivity
    (True positive rate – how good the test is at detecting disease)
  • X-axis: 1 − Specificity
    (False positive rate – how often the test incorrectly says someone has disease)

Each point on the curve represents a different cut-off value of the diagnostic test.

For example:

  • Lower threshold → higher sensitivity but more false positives
  • Higher threshold → higher specificity but more false negatives

⭐ How to Interpret the ROC Curve

1️⃣ Perfect test

A perfect test would go:

  • Straight up the Y-axis to sensitivity = 1
  • Then straight across the top

This means the test detects all disease and has no false positives.

2️⃣ Line of no discrimination

A diagonal line from:

(0,0) → (1,1)

represents a completely useless test.

This means the test performs no better than random guessing.

3️⃣ Better tests move toward the upper-left corner

The closer the curve moves toward the upper-left corner, the better the test.

This means:

  • High sensitivity
  • High specificity

📊 Area Under the Curve (AUC)

The most important statistic from an ROC curve is the Area Under the Curve (AUC).

  • AUC = 1.0 → perfect test
  • AUC = 0.5 → useless test (random guessing)

A larger AUC means a better diagnostic test.

In the image:

  • The blue curve has a larger AUC than the red curve
  • Therefore the blue test is better

In one sentence:
An ROC curve is a graph that shows how well a diagnostic test separates people with a disease from those without it by comparing sensitivity and false positive rate across different test thresholds.
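The AUC can be approximated from a handful of (false positive rate, sensitivity) points with the trapezoidal rule. The points below describe a hypothetical test, not real data:

```python
# Sketch: trapezoidal AUC for a small, hypothetical ROC curve
fpr = [0.0, 0.1, 0.3, 1.0]   # x-axis: 1 - specificity at each threshold
tpr = [0.0, 0.7, 0.9, 1.0]   # y-axis: sensitivity at each threshold

auc = sum((fpr[i + 1] - fpr[i]) * (tpr[i + 1] + tpr[i]) / 2
          for i in range(len(fpr) - 1))
print(round(auc, 2))  # 0.86 -> much better than the 0.5 of random guessing
```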

25
Q

How would you determine the optimal cut-off point for a test using the receiver operating characteristic (ROC) curve?

A

📈 Determining the Best Cut-off Point Using an ROC Curve

🧠 Explanation (High-school level)

Many diagnostic tests measure a continuous value (for example, blood sugar, PSA level, or number of screening questions answered “yes”). To decide whether the test result is positive or negative, doctors must choose a cut-off value (threshold).

An ROC curve helps determine the best cut-off point by showing how sensitivity and specificity change at different thresholds.

Key idea from the image (reworded):

  • Each point on the curve represents a different cut-off value of the test.
  • Changing the cut-off changes the balance between sensitivity and specificity.
  • The best cut-off is usually the point that provides the best trade-off between sensitivity and specificity.

For example in the image:

  • One cut-off gives very high sensitivity (95%) but poor specificity (45%), meaning many false positives.
  • Another cut-off provides a better balance, where both sensitivity and specificity are reasonably high.

This balanced point is typically closest to the upper-left corner of the ROC graph, which represents:

✔ high sensitivity
✔ high specificity
✔ fewer false positives and false negatives

🎯 Key Principle

When choosing the best cut-off using an ROC curve:

1️⃣ Plot sensitivity (true positive rate) on the Y-axis
2️⃣ Plot 1 − specificity (false positive rate) on the X-axis
3️⃣ Each test threshold produces a different point on the curve
4️⃣ The optimal cut-off is usually the point closest to the upper-left corner

However, the best cut-off also depends on the clinical purpose of the test:

  • If missing disease is dangerous (e.g., cancer screening) → choose a cut-off with very high sensitivity
  • If false positives cause harm or invasive testing → choose a cut-off with higher specificity

So sometimes doctors intentionally trade specificity for sensitivity, or vice versa.

🏥 Clinical Example

The CAGE questionnaire is used to screen for alcohol misuse. A score of 2 or more positive answers is often chosen as the best cut-off because it provides a good balance between sensitivity and specificity, identifying most patients with alcohol problems without producing too many false positives.

🧸 Explain it Like You’re 10

Imagine a metal detector looking for treasure.

  • If it’s too sensitive, it beeps at every bottle cap.
  • If it’s too strict, it misses real treasure.

The best setting is the one that finds most treasure but doesn’t beep all the time.

✅ In one sentence:
An ROC curve helps doctors choose the best test threshold by finding the cut-off that gives the best balance between sensitivity and specificity for the purpose of the test.
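One common numerical way to pick the point "closest to the upper-left corner" is the Youden index (sensitivity + specificity − 1). A minimal sketch with made-up candidate cut-offs:

```python
# Sketch: choosing a cut-off by the Youden index.
# Hypothetical (cut-off, sensitivity, specificity) triples:
candidates = [
    (1, 0.99, 0.30),   # very sensitive, many false positives
    (2, 0.90, 0.85),   # balanced
    (3, 0.60, 0.98),   # very specific, misses cases
]

best = max(candidates, key=lambda c: c[1] + c[2] - 1)
print(best[0])  # cut-off 2 gives the best overall trade-off
```

If missing disease were dangerous, a clinician might still prefer cut-off 1 despite its lower Youden index.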