Accuracy

The proportion of correct predictions among total predictions, a basic metric for classification model evaluation.

In-Depth Explanation

Accuracy measures the proportion of correct predictions (both true positives and true negatives) out of all predictions made. It's the most intuitive but often misleading classification metric.

Formula: Accuracy = (True Positives + True Negatives) / Total Predictions
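
Accuracy can be computed directly from the confusion-matrix counts. Below is a minimal Python sketch; the counts are invented purely for illustration:

```python
# Illustrative confusion-matrix counts for a binary classifier (made-up numbers)
true_positives = 40
true_negatives = 50
false_positives = 5
false_negatives = 5

total_predictions = (true_positives + true_negatives
                     + false_positives + false_negatives)

# Accuracy = (True Positives + True Negatives) / Total Predictions
accuracy = (true_positives + true_negatives) / total_predictions
print(f"Accuracy: {accuracy:.2%}")  # 90.00%
```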

When accuracy works:

  • Balanced classes
  • Equal cost of errors
  • Overall correctness matters

When accuracy misleads:

  • Imbalanced classes (99% accuracy on a 99/1 split is achievable by always predicting the majority class; see the sketch after this list)
  • Different error costs (missing a fraud case vs raising a false alarm)
  • Performance on a specific class matters more than overall correctness
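
To make the imbalanced-class failure mode concrete, here is a small sketch using scikit-learn (assumed available) on invented labels with a 2% positive rate, where the "model" simply predicts the majority class every time:

```python
from sklearn.metrics import accuracy_score, recall_score

# Hypothetical labels: 1,000 transactions, 2% of them fraudulent (label 1)
y_true = [1] * 20 + [0] * 980

# A useless "model" that predicts "not fraud" (0) for every transaction
y_pred = [0] * 1000

print(f"Accuracy: {accuracy_score(y_true, y_pred):.1%}")     # 98.0%
print(f"Fraud recall: {recall_score(y_true, y_pred):.1%}")   # 0.0%
```

High accuracy here says nothing about the model's ability to catch fraud, which is exactly the failure described in the example use case below.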

Alternatives to consider (sketched in code after this list):

  • Precision and recall
  • F1 score
  • AUC-ROC
  • Confusion matrix analysis
  • Domain-specific metrics
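
A brief sketch of how several of these alternatives can be computed with scikit-learn (assumed available); the labels and scores are invented solely for illustration:

```python
from sklearn.metrics import (precision_score, recall_score, f1_score,
                             roc_auc_score, confusion_matrix)

# Hypothetical ground truth, hard predictions, and predicted probabilities
y_true  = [0, 0, 0, 0, 0, 0, 0, 0, 1, 1]
y_pred  = [0, 0, 0, 0, 0, 1, 0, 0, 1, 0]
y_score = [0.1, 0.2, 0.1, 0.3, 0.2, 0.6, 0.1, 0.4, 0.8, 0.45]

print("Precision:", precision_score(y_true, y_pred))    # TP / (TP + FP)
print("Recall:   ", recall_score(y_true, y_pred))        # TP / (TP + FN)
print("F1 score: ", f1_score(y_true, y_pred))            # harmonic mean of precision and recall
print("AUC-ROC:  ", roc_auc_score(y_true, y_score))      # ranking quality across thresholds
print("Confusion matrix:\n", confusion_matrix(y_true, y_pred))
```

Domain-specific metrics (for example, a cost-weighted error that prices false negatives differently from false positives) depend on the business problem and are not shown here.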

Business Context

Accuracy is easy to understand but often inappropriate for real business problems where classes are imbalanced or error costs differ.

How Clever Ops Uses This

We help Australian businesses choose appropriate evaluation metrics beyond accuracy, ensuring AI models are measured on what matters.

Example Use Case

"Realising a 98% accurate fraud detection model is useless because it predicts "not fraud" for everything on a dataset with 2% fraud rate."

Category

data analytics

Need Expert Help?

Understanding is the first step. Let our experts help you implement AI solutions for your business.
