Evaluation Metrics: MAE
Evaluation metrics are essential tools for assessing the performance of machine learning models, particularly in regression tasks, where the goal is to predict continuous values. Three common regression metrics are Mean Absolute Error (MAE), Root Mean Square Error (RMSE), and R-squared (R2). This post covers the first of these, MAE.
Mean Absolute Error (MAE):
MAE measures the average absolute difference between the predicted values and the actual values:

MAE = (1/n) * Σ |yi − y^i|

where:
- n is the number of samples
- y^i is the predicted value for sample i
- yi is the actual value for sample i
In other words: take the errors (the differences between predicted and actual values), convert them to absolute values (so negative errors become positive), add them up, and divide by the number of samples. MAE tells you how far off, on average, your predictions are from the real values, expressed in the same units as the target variable.
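The steps above translate directly into a few lines of code. Here is a minimal sketch that computes MAE from scratch; the sample arrays are made-up illustrative numbers, not data from any real model:

```python
# Example predictions and ground-truth values (made-up numbers).
y_true = [3.0, -0.5, 2.0, 7.0]
y_pred = [2.5, 0.0, 2.0, 8.0]

# MAE = (1/n) * sum of |y_i - y^_i|
n = len(y_true)
mae = sum(abs(yt - yp) for yt, yp in zip(y_true, y_pred)) / n
print(mae)  # 0.5
```

In practice you would typically call a library routine such as `sklearn.metrics.mean_absolute_error`, which performs the same computation on array inputs.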