The proportion of correct predictions among all predictions; a basic metric for evaluating classification models.
Accuracy measures the proportion of correct predictions (both true positives and true negatives) out of all predictions made. It's the most intuitive but often misleading classification metric.
Formula: Accuracy = (True Positives + True Negatives) / Total Predictions
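The formula can be written directly as a small function over confusion-matrix counts (the function name and the example counts below are illustrative, not from any particular library):

```python
def accuracy(tp, tn, fp, fn):
    """Proportion of correct predictions among all predictions."""
    return (tp + tn) / (tp + tn + fp + fn)

# 40 true positives + 50 true negatives out of 100 predictions
print(accuracy(tp=40, tn=50, fp=5, fn=5))  # -> 0.9
```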
When accuracy works:
- Classes are roughly balanced
- False positives and false negatives carry similar costs
- A single, easily explained headline number is needed

When accuracy misleads:
- Classes are heavily imbalanced (e.g. fraud, rare disease)
- One type of error is far more costly than the other
- A model can score well simply by always predicting the majority class

Alternatives to consider:
- Precision: of the positive predictions, how many were correct
- Recall: of the actual positives, how many were caught
- F1 score: the harmonic mean of precision and recall
- AUC-ROC: ranking quality across all classification thresholds
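Precision, recall, and F1 can all be computed from the same confusion-matrix counts as accuracy; a minimal sketch (function names and the example counts are made up for illustration):

```python
def precision(tp, fp):
    """Of the positive predictions, the fraction that were correct."""
    return tp / (tp + fp)

def recall(tp, fn):
    """Of the actual positives, the fraction that were caught."""
    return tp / (tp + fn)

def f1(tp, fp, fn):
    """Harmonic mean of precision and recall."""
    p, r = precision(tp, fp), recall(tp, fn)
    return 2 * p * r / (p + r)

# A model that finds 8 of 20 real positives, with 2 false alarms
print(precision(tp=8, fp=2))        # -> 0.8
print(recall(tp=8, fn=12))          # -> 0.4
print(f1(tp=8, fp=2, fn=12))        # harmonic mean, pulled toward the lower score
```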
Accuracy is easy to understand but often inappropriate for real business problems where classes are imbalanced or error costs differ.
We help Australian businesses choose appropriate evaluation metrics beyond accuracy, ensuring AI models are measured on what matters.
"Realising that a 98% accurate fraud detection model is useless because it predicts 'not fraud' for everything on a dataset with a 2% fraud rate."
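That fraud scenario can be reproduced in a few lines; a sketch assuming 1,000 transactions with a 2% fraud rate and a trivial model that always predicts "not fraud":

```python
# 1,000 transactions, 2% fraud (1 = fraud, 0 = not fraud)
labels = [1] * 20 + [0] * 980
preds = [0] * 1000  # trivial model: always predicts "not fraud"

accuracy = sum(p == y for p, y in zip(preds, labels)) / len(labels)
recall = sum(p == 1 and y == 1 for p, y in zip(preds, labels)) / sum(labels)

print(f"accuracy: {accuracy:.0%}")  # -> 98%, looks excellent
print(f"recall:   {recall:.0%}")    # -> 0%, catches no fraud at all
```

The headline accuracy is 98%, yet the model catches zero fraudulent transactions, which is exactly why recall matters here.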