ml-metrics

A revisit of the classical evaluation metrics in machine learning (mainly based on Koo Ping Shung's article).

Precision

Precision tells you how precise/accurate your model's positive predictions are: out of those predicted positive, how many are actually positive.
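In terms of confusion-matrix counts, with TP the number of true positives and FP the number of false positives:

$Precision = \frac{TP}{TP + FP}$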

In email spam detection, a false positive means that a non-spam email (actual negative) has been identified as spam (predicted positive). The email user might lose important emails if the spam detection model's precision is not high.

Recall

Recall calculates how many of the Actual Positives our model captures by labeling them as Positive (True Positives).
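Using the same counts, with FN the number of false negatives:

$Recall = \frac{TP}{TP + FN}$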

Similarly, consider sick patient detection: if a sick patient (Actual Positive) goes through the test and is predicted as not sick (Predicted Negative), the cost associated with that False Negative will be extremely high if the sickness is contagious.

F1

$F_1 = \frac{2}{\frac{1}{precision} + \frac{1}{recall}}$
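F1 is the harmonic mean of precision and recall, so it is high only when both are high. Below is a minimal sketch computing all three metrics with scikit-learn (an assumed dependency here; the toy labels are invented purely for illustration):

```python
# Minimal sketch: precision, recall, and F1 on toy binary labels.
from sklearn.metrics import precision_score, recall_score, f1_score

y_true = [1, 0, 1, 1, 0, 1, 0, 0]  # actual labels (1 = positive)
y_pred = [1, 0, 1, 0, 0, 1, 1, 0]  # model predictions

print("precision:", precision_score(y_true, y_pred))  # TP / (TP + FP)
print("recall:   ", recall_score(y_true, y_pred))     # TP / (TP + FN)
print("F1:       ", f1_score(y_true, y_pred))         # harmonic mean of the two
```

With these toy labels there are 3 true positives, 1 false positive, and 1 false negative, so precision, recall, and F1 all come out to 0.75.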