Precision
The proportion of true positive predictions among all positive predictions, measuring how reliable positive predictions are.
In-Depth Explanation
Precision measures how many of the positive predictions were actually correct. It answers: "When the model predicts positive, how often is it right?"
Formula: Precision = True Positives / (True Positives + False Positives)
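The formula translates directly into code. A minimal sketch in Python, with illustrative counts:

```python
# Precision computed directly from prediction counts.
def precision(tp: int, fp: int) -> float:
    """True positives / all positive predictions."""
    if tp + fp == 0:
        return 0.0  # convention: no positive predictions made, treat as 0
    return tp / (tp + fp)

# Example: 80 correct positive flags out of 100 total positive flags.
print(precision(tp=80, fp=20))  # 0.8
```

Note that precision ignores false negatives entirely; a model that flags almost nothing can still score highly, which is why it is usually read alongside recall.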
When to prioritise precision:
- False positives are costly
- Acting on a positive prediction is expensive
- You need to be confident when flagging something
Examples:
- Spam detection (false positives lose important emails)
- Content moderation (false positives censor legitimate content)
- Medical testing (false positives cause unnecessary procedures)
Trade-off with recall:
- Higher precision usually means lower recall
- Adjusting the decision threshold shifts the trade-off
- Choose based on business costs
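The threshold effect can be seen with a small worked example. The scores and labels below are made up for illustration; raising the threshold trades recall for precision:

```python
# Illustrative (score, true_label) pairs from a hypothetical classifier.
scored = [(0.95, 1), (0.90, 1), (0.85, 0), (0.70, 1),
          (0.60, 0), (0.55, 1), (0.30, 0), (0.20, 0)]

def precision_recall(scored, threshold):
    """Count predictions at a given threshold and return (precision, recall)."""
    tp = sum(1 for s, y in scored if s >= threshold and y == 1)
    fp = sum(1 for s, y in scored if s >= threshold and y == 0)
    fn = sum(1 for s, y in scored if s < threshold and y == 1)
    prec = tp / (tp + fp) if tp + fp else 0.0
    rec = tp / (tp + fn) if tp + fn else 0.0
    return prec, rec

for t in (0.5, 0.9):
    p, r = precision_recall(scored, t)
    print(f"threshold={t}: precision={p:.2f} recall={r:.2f}")
# threshold=0.5: precision=0.67 recall=1.00
# threshold=0.9: precision=1.00 recall=0.50
```

Sweeping the threshold this way and costing each operating point against the business impact of false positives versus false negatives is the usual way to choose where to sit on the curve.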
Business Context
How Clever Ops Uses This
We help Australian businesses understand precision trade-offs, choosing thresholds that match business costs of different error types.
Example Use Case
"A fraud alert system prioritising precision to minimise false alarms that waste investigation resources and annoy legitimate customers."
Related Terms
Related Resources
Recall
The proportion of actual positive cases that were correctly identified, measuring how completely the model finds positives.
F1 Score
The harmonic mean of precision and recall, providing a single metric that balances both.
Accuracy
The proportion of correct predictions among total predictions, a basic metric for overall performance.
Learning Centre
Guides, articles, and resources on AI and automation.
AI & Automation Services
Explore our full AI automation service offering.
AI Readiness Assessment
Check if your business is ready for AI automation.
