From the course: Machine Learning with SageMaker by Pearson
Model evaluation metrics: Accuracy, precision, and recall - Amazon SageMaker Tutorial
After we have created our model, we need to ensure that it meets the performance requirements of the project. There are several metrics we can monitor to determine whether it is meeting those requirements; they help you identify overfitting and underfitting and provide insight into how well the model performs on new data. Generalizing to unseen data is an important property of our model to monitor. The key metrics for classification models are accuracy, precision, recall, and F1 score. Accuracy is the overall correctness: true positives plus true negatives, divided by the total. This brings us to a concept called the confusion matrix. Whenever you create a machine learning model, you feed it some data with a known outcome, meaning that if I ask you to predict X, and I think it should be labeled zero, and you return back to me that it is in fact predicted…
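The accuracy formula above extends naturally to precision, recall, and F1, all computed from the four cells of a binary confusion matrix. Here is a minimal sketch in plain Python; the counts are made-up example values, not from the course:

```python
# Hypothetical confusion-matrix counts for a binary classifier.
tp, tn, fp, fn = 80, 90, 10, 20
total = tp + tn + fp + fn

# Accuracy: overall correctness, (TP + TN) / total, as stated above.
accuracy = (tp + tn) / total

# Precision: of everything predicted positive, how much was actually positive.
precision = tp / (tp + fp)

# Recall: of everything actually positive, how much was found.
recall = tp / (tp + fn)

# F1: harmonic mean of precision and recall.
f1 = 2 * precision * recall / (precision + recall)

print(f"accuracy={accuracy:.3f} precision={precision:.3f} "
      f"recall={recall:.3f} f1={f1:.3f}")
```

In practice you would typically get these numbers from a library such as scikit-learn (`confusion_matrix`, `precision_score`, `recall_score`, `f1_score`) rather than computing them by hand.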
Contents
- Learning objectives (39s)
- Model evaluation metrics: Accuracy, precision, and recall (9m 39s)
- Using SageMaker Clarify for bias detection and interpretability (7m 40s)
- Comparing model performance using A/B testing (5m 47s)
- Model A/B testing demonstration (6m 26s)
- Managing model versions with SageMaker Model Registry (5m 55s)
- Model registry demonstration (10m 48s)