From the course: GitHub Copilot Cert Prep by Microsoft Press
Learn the limitations of generative AI - GitHub Copilot Tutorial
- [Instructor] The output quality we get from GitHub Copilot directly depends on its training data. As I've mentioned a couple of times, the underlying GPT models were trained over several years on the world's open-source code repositories. The model was originally called Codex; now it's simply part of the general-purpose GPT family, and GitHub fine-tunes those off-the-shelf GPT models for development scenarios. Limited or skewed data in that training dataset can yield inaccurate or biased code, so we should go into this as skeptical developers. I do want you to be skeptical of GitHub Copilot, and we want to adopt zero-trust principles with AI, just as we do with everything else in IT. Take hallucinations, for example: GitHub Copilot might produce code that's syntactically correct but logically flawed or semantically off track. That's to be expected. Therefore, a human in the loop, developer review, and post-generation testing are critical to ensure functionality. I've been working…
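To make the "syntactically correct but logically flawed" point concrete, here is a minimal hypothetical sketch in Python. The function names, the off-by-one boundary bug, and the test are illustrative assumptions, not material from the course; they simply show how a plausible Copilot-style hallucination can compile and run yet fail a post-generation test written by the reviewing developer.

```python
# A hypothetical, Copilot-style suggestion: it parses and runs, but the
# boundary check is wrong (`>= 0` also counts zeros), diluting the mean.
def average_of_positives_suggested(values):
    total, count = 0, 0
    for v in values:
        if v >= 0:  # logic bug: zero is not a positive number
            total += v
            count += 1
    return total / count if count else 0.0


# The version a reviewing developer would accept after reading the suggestion.
def average_of_positives(values):
    positives = [v for v in values if v > 0]
    return sum(positives) / len(positives) if positives else 0.0


# Post-generation test: it exposes the flawed suggestion and passes on the fix.
def check(fn):
    return fn([2, 0, 4]) == 3.0


if __name__ == "__main__":
    print("suggested:", "pass" if check(average_of_positives_suggested) else "FAIL")
    print("reviewed: ", "pass" if check(average_of_positives) else "FAIL")
```

Running the sketch prints "FAIL" for the suggested version and "pass" for the reviewed one, which is exactly the human-in-the-loop workflow the instructor describes.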
Contents
- Learning objectives (50s)
- Learn the risks associated with using AI tools in software development (1m 43s)
- Learn the limitations of generative AI (2m 27s)
- Discover why validating AI-generated code is essential for quality and security (1m 5s)
- Identify best practices for responsibly operating AI tools (1m 41s)
- Mitigate potential harms, such as bias and insecure code (1m 7s)
- Define ethical AI principles and how they apply to GitHub Copilot (9m 5s)