Choosing the right evaluation metric for your machine learning project is crucial, as it determines which model you'll ultimately use. How do you choose an appropriate one? This talk will explore the most important evaluation metrics used in regression and classification tasks, their pros and cons, and how to make an informed choice.
In this talk, we'll go through evaluation metrics for regression tasks (R², MAE, MSE, RMSE, and RMSLE) and classification tasks (classification accuracy, precision, recall, F1 score, ROC AUC, precision/recall AUC, and the Matthews correlation coefficient, along with ways to extend some of these from binary to multiclass problems). I'll cover the differences between them, their trade-offs, and when some are more helpful than others.
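As a taste of what the talk covers, here is a minimal sketch of computing several of these metrics with scikit-learn (the toy arrays below are illustrative assumptions, not data from the talk):

```python
import numpy as np
from sklearn.metrics import (
    r2_score, mean_absolute_error, mean_squared_error,
    accuracy_score, precision_score, recall_score, f1_score,
    roc_auc_score, matthews_corrcoef,
)

# Regression: compare true values against model predictions.
y_true = np.array([3.0, 5.0, 2.5, 7.0])
y_pred = np.array([2.5, 5.0, 3.0, 8.0])
r2 = r2_score(y_true, y_pred)
mae = mean_absolute_error(y_true, y_pred)          # 0.5
rmse = np.sqrt(mean_squared_error(y_true, y_pred)) # sqrt(0.375)

# Classification: binary labels plus predicted probabilities.
# Threshold-based metrics (accuracy, precision, recall, F1, MCC)
# need hard predictions; ROC AUC is computed from the probabilities.
labels = np.array([0, 1, 1, 0, 1, 0])
probs = np.array([0.2, 0.8, 0.4, 0.6, 0.9, 0.1])
preds = (probs >= 0.5).astype(int)
acc = accuracy_score(labels, preds)
prec = precision_score(labels, preds)
rec = recall_score(labels, preds)
f1 = f1_score(labels, preds)
auc = roc_auc_score(labels, probs)
mcc = matthews_corrcoef(labels, preds)
```

Note that accuracy, precision, recall, and F1 all change if you move the 0.5 threshold, while ROC AUC does not, which is one of the trade-offs the talk discusses.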