Accuracy
The proportion of correct predictions among total predictions. A basic classification metric that can be misleading for imbalanced datasets.
In-Depth Explanation
Accuracy measures the percentage of predictions a model gets right. While intuitive, it is often inadequate on its own, especially when classes are imbalanced.
Formula: Accuracy = (True Positives + True Negatives) / Total Predictions
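To make the formula concrete, here is a minimal Python sketch; the accuracy function and the sample labels are illustrative, not taken from any particular library:

```python
# Minimal sketch: accuracy = correct predictions / total predictions.
# Labels are illustrative; 1 = positive class, 0 = negative class.
def accuracy(y_true, y_pred):
    correct = sum(1 for t, p in zip(y_true, y_pred) if t == p)
    return correct / len(y_true)

y_true = [1, 0, 1, 1, 0, 0, 1, 0]
y_pred = [1, 0, 0, 1, 0, 1, 1, 0]
print(accuracy(y_true, y_pred))  # 0.75 (6 of 8 predictions correct)
```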
When accuracy works well:
- Balanced classes (similar counts)
- All errors equally important
- Simple performance summary needed
When accuracy misleads (see the sketch after this list):
- Imbalanced classes (a model can reach 99% accuracy on data with 1% fraud simply by never flagging fraud)
- Unequal error costs (a false negative may cost far more than a false positive, or vice versa)
- Multi-class problems where some classes matter more than others
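The imbalance trap is easy to demonstrate. In this hypothetical sketch with made-up numbers, a "model" that always predicts the majority class scores 99% accuracy while catching zero fraud:

```python
# Hypothetical data: 1% fraud (label 1), 99% legitimate (label 0).
y_true = [1] * 10 + [0] * 990
y_pred = [0] * 1000  # a "model" that always predicts "not fraud"

correct = sum(1 for t, p in zip(y_true, y_pred) if t == p)
print(f"Accuracy: {correct / len(y_true):.0%}")          # Accuracy: 99%
print(f"Frauds caught: {sum(y_pred)} of {sum(y_true)}")  # Frauds caught: 0 of 10
```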
Better alternatives (combined in the sketch after this list):
- Precision and Recall
- F1 Score
- ROC-AUC
- Confusion matrix analysis
- Domain-specific metrics
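To show how these alternatives expose the failure above, here is a sketch assuming scikit-learn is installed, reusing the same hypothetical fraud data:

```python
from sklearn.metrics import (
    confusion_matrix, f1_score, precision_score, recall_score
)

y_true = [1] * 10 + [0] * 990   # same hypothetical fraud data
y_pred = [0] * 1000             # always "not fraud"

# All three scores collapse to zero, unlike the misleading 99% accuracy.
print(precision_score(y_true, y_pred, zero_division=0))  # 0.0
print(recall_score(y_true, y_pred, zero_division=0))     # 0.0
print(f1_score(y_true, y_pred, zero_division=0))         # 0.0
print(confusion_matrix(y_true, y_pred))
# [[990   0]
#  [ 10   0]]
```

The confusion matrix makes the failure visible: all 10 actual frauds (bottom row) were predicted as legitimate.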
Business Context
Don't be fooled by high accuracy. A fraud detector that always says 'not fraud' achieves 99% accuracy but catches nothing. Choose metrics that reflect business impact.
How Clever Ops Uses This
We help Australian businesses select appropriate metrics for their AI projects, ensuring models optimise for actual business outcomes.
Example Use Case
"A model with 95% accuracy on customer churn sounds good, but if only 5% of customers churn, predicting "won't churn" for everyone achieves 95% accuracy too."
Related Terms
Precision
Of all positive predictions, what proportion was actually positive. High precision means few false positives.
Recall
Of all actual positives, what proportion did the model identify. High recall means few false negatives.
F1 Score
The harmonic mean of precision and recall, providing a single metric that balances both.
