F1 Score
The harmonic mean of precision and recall, providing a single metric that balances both. Useful when both false positives and false negatives carry a cost.
In-Depth Explanation
The F1 score combines precision and recall into a single metric. The harmonic mean punishes extreme values, requiring both metrics to be high for a good F1.
F1 formula: F1 = 2 × (Precision × Recall) / (Precision + Recall)
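The formula above can be sketched in a few lines of Python. This is a minimal illustration (the function name and sample values are our own), showing how the harmonic mean punishes an extreme value in either metric:

```python
def f1_score(precision: float, recall: float) -> float:
    """Harmonic mean of precision and recall (defined as 0 when both are 0)."""
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)

# Balanced metrics give a high F1:
print(f1_score(0.8, 0.8))   # ≈ 0.8
# The harmonic mean drags the score down when one metric is poor,
# even though the arithmetic mean of 0.95 and 0.1 would be 0.525:
print(f1_score(0.95, 0.1))  # ≈ 0.181
```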
F1 characteristics:
- Range: 0 to 1 (higher is better)
- Harmonic mean (not arithmetic)
- Low if either precision OR recall is low
- Best when both are balanced
When to use F1:
- Imbalanced datasets
- Both false positives and negatives matter
- Need single metric for model comparison
Variants:
- Macro F1: Average F1 across classes
- Micro F1: Global TP/FP/FN counts
- Weighted F1: Class-weighted average
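To make the three variants concrete, here is a small pure-Python sketch over illustrative labels (the data and helper function are our own, not from a Clever Ops project; libraries such as scikit-learn expose the same variants via an `average` parameter):

```python
from collections import Counter

def per_class_f1(y_true, y_pred, label):
    """F1 for one class, computed from its TP/FP/FN counts."""
    tp = sum(t == label and p == label for t, p in zip(y_true, y_pred))
    fp = sum(t != label and p == label for t, p in zip(y_true, y_pred))
    fn = sum(t == label and p != label for t, p in zip(y_true, y_pred))
    denom = 2 * tp + fp + fn
    return 2 * tp / denom if denom else 0.0

y_true = ["a", "a", "a", "b", "b", "c"]
y_pred = ["a", "a", "b", "b", "c", "c"]
labels = sorted(set(y_true))

# Macro F1: unweighted mean of per-class F1 scores
macro = sum(per_class_f1(y_true, y_pred, c) for c in labels) / len(labels)

# Micro F1: pool TP/FP/FN across classes; for single-label
# classification this works out to overall accuracy
micro = sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)

# Weighted F1: per-class F1 weighted by class frequency (support)
support = Counter(y_true)
weighted = sum(per_class_f1(y_true, y_pred, c) * support[c]
               for c in labels) / len(y_true)
```

Macro F1 treats every class equally regardless of size, which is why it is often preferred on imbalanced datasets where the minority class matters.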
Business Context
How Clever Ops Uses This
We typically report F1 scores for Australian business classification projects, especially when dealing with imbalanced outcomes.
Example Use Case
"Comparing fraud detection models: Model A has F1 of 0.82, Model B has 0.76. Model A better balances catching fraud while minimising false alerts."
Related Terms
Precision
Of all positive predictions, what proportion was actually positive. High precisi...
Recall
Of all actual positives, what proportion did the model identify. High recall mea...
Accuracy
The proportion of correct predictions among total predictions. A basic classific...
Related Resources
Learning Centre
Guides, articles, and resources on AI and automation.
AI & Automation Services
Explore our full AI automation service offering.
AI Readiness Assessment
Check if your business is ready for AI automation.
