Precision, recall, and accuracy explained

When evaluating a model, three key metrics often come up:

  • Precision = Of the items the model labeled positive, how many actually were? (TP / (TP + FP))
    Example: Model says 10 images show wild boars, 8 really do → Precision = 8/10 = 80%
  • Recall = Of the actual positives, how many did the model detect? (TP / (TP + FN))
    Example: 10 wild boars in total, model finds 8 → Recall = 8/10 = 80%
  • Accuracy = How many predictions were correct overall? ((TP + TN) / total)
    Example: Out of 100 images, 90 were correctly classified (positive or negative) → Accuracy = 90/100 = 90%
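
To make the arithmetic concrete, here is a minimal Python sketch that derives all three metrics from confusion-matrix counts. The counts are hypothetical, chosen to reproduce the 8/10 precision and recall examples above (on this particular 100-image set they imply 96% accuracy, a different set than the 90% example):

```python
# Hypothetical confusion-matrix counts for the wild-boar classifier.
tp = 8   # true positives: boar images correctly labeled "boar"
fp = 2   # false positives: non-boar images labeled "boar"
fn = 2   # false negatives: boar images the model missed
tn = 88  # true negatives: non-boar images correctly labeled "not boar"

precision = tp / (tp + fp)                    # 8 / 10  = 0.80
recall    = tp / (tp + fn)                    # 8 / 10  = 0.80
accuracy  = (tp + tn) / (tp + fp + fn + tn)   # 96 / 100 = 0.96

print(f"precision={precision:.2f} recall={recall:.2f} accuracy={accuracy:.2f}")
```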

Accuracy is the most commonly used metric because it gives an overall sense of performance. But it can be misleading when classes are imbalanced (e.g., lots of non-boar images), as the sketch below shows. That’s where precision and recall give deeper insight.
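
To see the imbalance pitfall, consider a hypothetical degenerate model that never predicts "boar" on a made-up set of 1000 images containing only 10 boars: it scores 99% accuracy while catching zero boars.

```python
# A model that always answers "not boar" on 1000 images (10 boars, 990 non-boars).
tp, fp = 0, 0      # no positive predictions at all
fn, tn = 10, 990   # every boar is missed; every non-boar is "correct"

accuracy = (tp + tn) / (tp + fp + fn + tn)   # 990 / 1000 = 0.99
recall   = tp / (tp + fn)                    # 0 / 10 = 0.00
# Precision is undefined here: tp + fp == 0, so there is nothing to divide.

print(f"accuracy={accuracy:.2%} recall={recall:.2%}")  # 99.00% vs 0.00%
```

The 99% accuracy looks excellent, yet recall exposes that the model is useless for the task: it never finds a single boar.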

In short:

  • Precision = How right were the positives? (reduce false alarms)
  • Recall = How many positives did I catch? (reduce misses)
  • Accuracy = How often was the model right, overall?