From the course: Introduction to Auditing AI Systems

AI audit limitations and opportunities

- [Narrator] While AI audits provide a way to improve the outcomes of AI systems, they have various limitations that make it clear they are not a silver bullet for responsible AI. Some sources of bias in AI are simply ignored by a typical audit, and some applications of AI are nuanced in ways that can't be improved by purely technical means. For example, in the United States, studies show that historical policing data is flawed because it reflects patterns of policing in minority communities. So, an AI system trained on historical criminal records can achieve parity between groups and still raise serious ethical questions. In another example, generative AI tools have contributed to worker strikes, including those by the Writers Guild of America and SAG-AFTRA, some of Hollywood's largest unions. The adoption of generative AI raises many questions about worker rights, universal basic income, and copyright, but an AI audit often neglects these societal contexts. Another limitation is the lack of oversight for auditors. Currently, there are no formal qualifications or certifications required for AI auditors, leading to inconsistent audit results. The lack of standards for audits can result in poor-quality assessments, pressure to give systems a passing grade, and inexperienced auditors certifying AI tools. Additionally, demographic data is a critical component for comparing outcomes across protected classes. Unfortunately, existing data privacy laws require companies to avoid collecting demographic information, or to eventually destroy it, which leaves auditors in the vulnerable position of collecting this data themselves. This is a great example of why regulation on AI and data privacy should be developed side by side. The process of being audited can be a stressful one, and it can lead companies to manipulate or modify their systems to seem more favorable than they really are. Making system changes under scrutiny is something to avoid.
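To make the parity point concrete, here is a minimal illustrative sketch, not from the course, of how an auditor might compute selection rates per demographic group and the gap between them. The function names and the toy data are assumptions for illustration; the key takeaway matches the narration: a gap of zero does not settle the ethical questions about the underlying data.

```python
# Illustrative sketch (not from the course): measuring statistical parity
# across demographic groups, given binary predictions and group labels.
from collections import defaultdict

def selection_rates(predictions, groups):
    """Positive-outcome rate for each demographic group."""
    counts = defaultdict(lambda: [0, 0])  # group -> [positives, total]
    for pred, group in zip(predictions, groups):
        counts[group][0] += pred
        counts[group][1] += 1
    return {g: pos / total for g, (pos, total) in counts.items()}

def parity_gap(predictions, groups):
    """Largest difference in selection rates between any two groups."""
    rates = selection_rates(predictions, groups)
    return max(rates.values()) - min(rates.values())

# Toy example: both groups are selected at the same rate, so the gap is 0.
# Yet if the training data (e.g. historical arrest records) is itself
# skewed, this metric alone cannot surface that ethical problem.
preds  = [1, 0, 1, 0]
groups = ["A", "A", "B", "B"]
print(parity_gap(preds, groups))  # 0.0
```

A real audit would use an established fairness library and multiple metrics, since parity of selection rates is only one narrow definition of fairness.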
These defensive changes can hinder the effectiveness of an AI audit and lead to inaccurate results. In addition, companies may not implement recommendations for issues that aren't seen as severe. Despite these limitations, AI audits offer several opportunities, including the ability to avoid AI incidents by catching disparities before deployment. Ideally, companies will conduct internal audits, monitor AI performance, and schedule routine follow-up audits. Regaining public trust is a concern for many companies, as fears around job replacement and the ability to contest automated decisions matter to the general public. With hype and attention at an all-time high, consumers are skeptical about trusting organizations with their data. Being proactive rather than reactive about sharing model assumptions can make organizations leaders in responsible AI. AI audits also provide an opportunity to improve AI systems continually. By identifying areas for improvement through the audit process, companies can retrain and update their models to ensure they make preferable decisions consistently, which should also help models remain accurate and generalizable. Keep in mind, AI audits have both limitations and opportunities that must be carefully considered. While they can provide valuable insights into the fairness and performance of a system, auditors should also consider the broader ethical implications in addition to compliance.
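The idea of catching disparities before deployment can be sketched as a simple pre-release audit gate. This is a hypothetical illustration, not the course's methodology: the metric names and thresholds are assumptions, and a real audit program would define them per use case and regulation.

```python
# Hypothetical pre-deployment audit gate (names and thresholds are
# illustrative assumptions): block a release when fairness or accuracy
# checks fail, and record findings for the audit report.

def audit_gate(metrics, max_parity_gap=0.1, min_accuracy=0.8):
    """Return (passed, findings) for a candidate model's audit metrics."""
    findings = []
    if metrics["parity_gap"] > max_parity_gap:
        findings.append(
            f"parity gap {metrics['parity_gap']:.2f} exceeds {max_parity_gap}"
        )
    if metrics["accuracy"] < min_accuracy:
        findings.append(
            f"accuracy {metrics['accuracy']:.2f} below {min_accuracy}"
        )
    return (len(findings) == 0, findings)

# A model with a large disparity fails the gate even though it is accurate.
passed, findings = audit_gate({"parity_gap": 0.25, "accuracy": 0.91})
print(passed, findings)
```

Running the same gate on every retrained model version is one concrete way to turn a one-off audit into the routine, continual monitoring the narration recommends.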
