From the course: Implementing a Data Strategy for Responsible AI
Societal impacts of foundation models
- [Instructor] Now that you've built this well-functioning, task-specific pre-trained model, be aware of five common issues that still plague generative AI tools. One, inadequate content filtering. Two, incomplete detection of manipulated content. Three, lack of adversarial robustness. Four, limited explainability and accountability. And five, unsatisfactory privacy safeguards. Now, not all output produced by generative AI models is real, but it is realistic. Misinformation, hate speech, and other harmful content can be mixed in with the relevant content. Using advanced adaptive filtering along with human and automated content moderation could help address the inadequate content filtering issue. Inadequate content filtering became a problem because manipulated content wasn't sufficiently detected before the output sequence ran. It would've been better to use content verification tools during the model-building phase. So you want to minimize the spread of misinformation…
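The combination of automated filtering and escalation to human moderators described above can be sketched in a few lines. This is a minimal illustration, not a production filter: the blocklist terms, the `toxicity_score` function, and the routing thresholds are all hypothetical stand-ins for a real learned classifier and policy.

```python
# Minimal sketch of an automated content-moderation step that routes
# model output to "passed", "human_review", or "blocked".
# BLOCKLIST and toxicity_score are hypothetical placeholders for a
# real moderation model; thresholds are illustrative only.

BLOCKLIST = {"badterm1", "badterm2"}  # placeholder harmful terms

def toxicity_score(text: str) -> float:
    """Hypothetical stand-in for a learned toxicity classifier."""
    return 1.0 if any(term in text.lower() for term in BLOCKLIST) else 0.0

def moderate(text: str, block_threshold: float = 0.8,
             review_threshold: float = 0.5) -> str:
    """Route generated output: block, escalate to a human, or pass."""
    score = toxicity_score(text)
    if score >= block_threshold:
        return "blocked"          # automated filter catches it outright
    if score >= review_threshold:
        return "human_review"     # borderline content goes to a moderator
    return "passed"
```

In practice the score would come from a trained classifier and the thresholds would be tuned adaptively, but the pipeline shape — automated scoring first, human moderation for borderline cases — matches the adaptive-filtering approach described above.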