From the course: Implementing a Data Strategy for Responsible AI
Model monitoring
- [Instructor] Model monitoring means closely tracking performance after your model has gone into production. Ideally, you want to start monitoring immediately after you deploy a model. Using reference data windows helps you establish a baseline for comparison and track changes within specific time periods. You can then select certain features to monitor as a way to reduce computational cost. You likely know and do these activities already, but what's often hazy is which evaluation and performance metrics matter most. There are over 100 tools and metrics available that could help you build systems that respect human rights and that are fair, transparent, explainable, robust, secure, and safe. Let me share five major evaluation metrics to prioritize in your experiments, courtesy of Microsoft Azure. First, there's groundedness, which evaluates how well the model's generated answers align with information from the input source. Second is relevance, which evaluates the…
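To make the reference-window idea concrete, here is a minimal sketch of one common drift check: comparing a monitored feature's distribution in a current production window against a reference window using the population stability index (PSI). The function names, window data, and the 0.1 / 0.25 thresholds are illustrative assumptions, not the course's prescribed method.

```python
import math

def psi(reference, current, bins=10):
    """Population Stability Index between a reference window and a
    current production window for one monitored feature.
    (Illustrative sketch; binning and thresholds are assumptions.)"""
    lo = min(min(reference), min(current))
    hi = max(max(reference), max(current))
    width = (hi - lo) / bins or 1.0  # guard against a zero-width range

    def fractions(values):
        counts = [0] * bins
        for v in values:
            idx = min(int((v - lo) / width), bins - 1)
            counts[idx] += 1
        # Smooth zero-count bins so the log ratio stays defined.
        return [(c + 0.5) / (len(values) + 0.5 * bins) for c in counts]

    ref_f, cur_f = fractions(reference), fractions(current)
    return sum((c - r) * math.log(c / r) for r, c in zip(ref_f, cur_f))

# A window identical to the baseline scores near 0; shifted data scores high.
baseline = [0.1 * i for i in range(100)]        # reference window
drifted = [0.1 * i + 3.0 for i in range(100)]   # shifted production window
print(psi(baseline, baseline) < 0.1)    # stable feature
print(psi(baseline, drifted) > 0.25)    # drift worth investigating
```

Running this per selected feature, on a schedule, is one way to keep monitoring costs down while still catching shifts against the baseline window.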