Accuracy
The proportion of correct predictions among total predictions, a basic metric for classification model evaluation.
In-Depth Explanation
Accuracy measures the proportion of correct predictions (both true positives and true negatives) out of all predictions made. It's the most intuitive but often misleading classification metric.
Formula: Accuracy = (True Positives + True Negatives) / Total Predictions
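The formula above can be sketched in a few lines of Python (the labels below are illustrative, not from any real dataset):

```python
# Accuracy = correct predictions / total predictions.
def accuracy(y_true, y_pred):
    correct = sum(t == p for t, p in zip(y_true, y_pred))
    return correct / len(y_true)

y_true = [1, 0, 1, 1, 0, 0]  # actual labels (hypothetical)
y_pred = [1, 0, 0, 1, 0, 1]  # model predictions (hypothetical)
print(accuracy(y_true, y_pred))  # 4 of 6 correct: 0.6666666666666666
```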
When accuracy works:
- Balanced classes
- Equal cost of errors
- Overall correctness matters
When accuracy misleads:
- Imbalanced classes (99% accuracy on a 99/1 split is trivial: always predicting the majority class achieves it)
- Different error costs (missing fraud vs false alarm)
- Specific class performance matters
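The imbalanced-class pitfall above is easy to demonstrate. In this sketch (with made-up data), a model that predicts "negative" for everything scores 99% accuracy yet catches zero positive cases:

```python
# Hypothetical 99/1 class split: 99 negatives, 1 positive.
y_true = [0] * 99 + [1]
y_pred = [0] * 100  # a "model" that always predicts the majority class

accuracy = sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)
recall = sum(t == 1 and p == 1 for t, p in zip(y_true, y_pred)) / sum(y_true)

print(accuracy)  # 0.99 -- looks excellent
print(recall)    # 0.0  -- finds no positives at all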
Alternatives to consider:
- Precision and recall
- F1 score
- AUC-ROC
- Confusion matrix analysis
- Domain-specific metrics
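Several of the alternatives listed above can be computed directly from confusion-matrix counts. A minimal sketch, using illustrative counts rather than real data:

```python
# Hypothetical confusion-matrix counts for a binary classifier.
tp, fp, fn, tn = 8, 2, 4, 86

precision = tp / (tp + fp)                    # share of positive calls that are right
recall = tp / (tp + fn)                       # share of actual positives found
f1 = 2 * precision * recall / (precision + recall)  # harmonic mean of the two
accuracy = (tp + tn) / (tp + fp + fn + tn)

print(precision, recall, f1, accuracy)
```

Note that accuracy here is 0.94 even though a third of the actual positives were missed, which is exactly why the metrics above are worth reporting alongside it.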
Business Context
Accuracy is easy to understand but often inappropriate for real business problems where classes are imbalanced or error costs differ.
How Clever Ops Uses This
We help Australian businesses choose appropriate evaluation metrics beyond accuracy, ensuring AI models are measured on what matters.
Example Use Case
"Realising a 98% accurate fraud detection model is useless because it predicts "not fraud" for everything on a dataset with 2% fraud rate."
Related Terms
Related Resources
Precision
The proportion of true positive predictions among all positive predictions, measuring how many predicted positives are actually positive.
Recall
The proportion of actual positive cases that were correctly identified, measuring how many real positives the model finds.
F1 Score
The harmonic mean of precision and recall, providing a single metric that balances the two.
Learning Centre
Guides, articles, and resources on AI and automation.
AI & Automation Services
Explore our full AI automation service offering.
AI Readiness Assessment
Check if your business is ready for AI automation.
