Evaluation Metrics and Scores:
1. Accuracy:
- Definition: The ratio of correctly predicted instances to the total number of instances.
- Formula:
Accuracy = (TP + TN) / (TP + FP + TN + FN)
- For example, if a model classifies 90 out of 100 instances correctly, its accuracy is 90%.
2. Precision:
- Definition: The ratio of correctly predicted positive observations to the total predicted positives.
- Formula:
Precision = TP / (TP + FP)
3. Recall (Sensitivity or True Positive Rate):
- Definition: The ratio of correctly predicted positive observations to all observations in the actual positive class.
- Formula:
Recall = TP / (TP + FN)
4. F1 Score:
- Definition: The harmonic mean of precision and recall. It ranges from 0 to 1, where 1 is the best.
- Formula:
F1 = 2 * Precision * Recall / (Precision + Recall)
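As a quick sketch of how these formulas translate to code, the following Python function computes all four metrics from raw confusion-matrix counts. The function name and the zero-division guards are illustrative choices, not taken from any particular library:

def classification_metrics(tp, fp, tn, fn):
    """Compute accuracy, precision, recall, and F1 from confusion-matrix counts."""
    total = tp + fp + tn + fn
    accuracy = (tp + tn) / total
    # Guard against division by zero when there are no predicted or actual positives.
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if (precision + recall) else 0.0)
    return accuracy, precision, recall, f1

For the worked example below, classification_metrics(25, 5, 15, 10) returns approximately (0.73, 0.83, 0.71, 0.77).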
Here is an example of how to calculate accuracy, precision, recall, and F1 score for a binary classification model:
- True positives (TP): The number of instances that are correctly classified as positive.
- False positives (FP): The number of instances that are incorrectly classified as positive.
- True negatives (TN): The number of instances that are correctly classified as negative.
- False negatives (FN): The number of instances that are incorrectly classified as negative.
For example, suppose a model predicts that 30 instances are positive, and 25 of those are actually positive (TP = 25, FP = 5). It predicts the remaining 25 instances as negative, and 15 of those are actually negative (TN = 15, FN = 10). Then the accuracy, precision, recall, and F1 score for the model would be:
- Accuracy = (25 + 15) / (25 + 5 + 15 + 10) ≈ 0.73
- Precision = 25 / (25 + 5) ≈ 0.83
- Recall = 25 / (25 + 10) ≈ 0.71
- F1 = 2 * 0.83 * 0.71 / (0.83 + 0.71) ≈ 0.77
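To sanity-check the worked example, the same counts can be passed to scikit-learn, assuming it is installed in your environment. The label arrays below are just one way to reconstruct TP = 25, FP = 5, TN = 15, FN = 10:

from sklearn.metrics import accuracy_score, precision_score, recall_score, f1_score

# Rebuild label arrays that match the counts above: TP=25, FP=5, TN=15, FN=10.
y_true = [1] * 25 + [0] * 5 + [0] * 15 + [1] * 10
y_pred = [1] * 25 + [1] * 5 + [0] * 15 + [0] * 10

print(f"Accuracy:  {accuracy_score(y_true, y_pred):.2f}")   # ~0.73
print(f"Precision: {precision_score(y_true, y_pred):.2f}")  # ~0.83
print(f"Recall:    {recall_score(y_true, y_pred):.2f}")     # ~0.71
print(f"F1:        {f1_score(y_true, y_pred):.2f}")         # ~0.77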