Recall
The proportion of actual positive cases that were correctly identified, measuring how completely positives are found.
In-Depth Explanation
Recall (also called sensitivity or true positive rate) measures how many of the actual positive cases the model found. It answers: "Of all the positives, how many did we catch?"
Formula: Recall = True Positives / (True Positives + False Negatives)
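The formula can be sketched in a few lines of Python. The function name and the example labels below are illustrative, not from the article:

```python
# Minimal sketch of the recall formula: TP / (TP + FN).
def recall(y_true, y_pred, positive=1):
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == positive and p == positive)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == positive and p != positive)
    return tp / (tp + fn) if (tp + fn) else 0.0

y_true = [1, 1, 1, 0, 0, 1]  # 4 actual positives
y_pred = [1, 0, 1, 0, 1, 1]  # model catches 3 of them, misses 1
print(recall(y_true, y_pred))  # 3 / (3 + 1) = 0.75
```

Note that false positives do not appear in the denominator: recall only cares about how many of the real positives were found, which is why it must be read alongside precision.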
When to prioritise recall:
- Missing positives is costly
- You need to find every instance of a condition
- False negatives are dangerous
Examples:
- Cancer screening (missing cancer is worse than extra tests)
- Security threats (missing attacks is catastrophic)
- Fraud detection (catching all fraud may be priority)
Trade-off with precision:
- Higher recall usually means lower precision
- Lowering the classification threshold catches more positives but produces more false alarms
- Choose based on error costs
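The threshold trade-off above can be demonstrated with a small sketch. The scores and labels are made up for illustration:

```python
# Sketch: how moving the decision threshold trades recall against precision.
def recall_precision_at(scores, labels, threshold):
    preds = [1 if s >= threshold else 0 for s in scores]
    tp = sum(1 for p, t in zip(preds, labels) if p == 1 and t == 1)
    fp = sum(1 for p, t in zip(preds, labels) if p == 1 and t == 0)
    fn = sum(1 for p, t in zip(preds, labels) if p == 0 and t == 1)
    rec = tp / (tp + fn) if (tp + fn) else 0.0
    prec = tp / (tp + fp) if (tp + fp) else 0.0
    return rec, prec

scores = [0.95, 0.80, 0.65, 0.55, 0.40, 0.30]  # model confidence per case
labels = [1,    1,    0,    1,    0,    0]     # actual outcomes

for threshold in (0.9, 0.5):
    r, p = recall_precision_at(scores, labels, threshold)
    print(f"threshold={threshold}: recall={r:.2f}, precision={p:.2f}")
# threshold=0.9: recall=0.33, precision=1.00  (strict: misses positives)
# threshold=0.5: recall=1.00, precision=0.75  (lenient: one false alarm)
```

Lowering the threshold from 0.9 to 0.5 catches all three positives but admits a false positive, which is exactly the "wide net" trade that recall-first systems accept.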
Business Context
Prioritise recall when missing positive cases is costly or dangerous. Cast a wide net to catch everything important.
How Clever Ops Uses This
We help Australian businesses set recall requirements based on the cost of missed detections in their specific domain.
Example Use Case
"A safety defect detection system prioritising recall to catch all potential safety issues, accepting some false alarms for thorough coverage."
Related Terms
Precision
The proportion of true positive predictions among all positive predictions, meas...
F1 Score
The harmonic mean of precision and recall, providing a single metric that balanc...
Confusion Matrix
A table showing predicted vs actual classifications, revealing true positives, f...
Related Resources
Learning Centre
Guides, articles, and resources on AI and automation.
AI & Automation Services
Explore our full AI automation service offering.
AI Readiness Assessment
Check if your business is ready for AI automation.
