From the course: GitHub Copilot Cert Prep by Microsoft Press


Learn the limitations of generative AI


- [Instructor] The output quality we get from GitHub Copilot depends directly on its training data. As I've mentioned a couple of times, GPT was trained over several years on the world's open-source code repositories. The model was originally called Codex; now it's simply part of the general-purpose GPT family, and GitHub fine-tunes those off-the-shelf GPT models for development scenarios. Limited or skewed data in that training dataset can produce inaccurate or biased code. So we're going into this as skeptical developers. I do want you to be skeptical of GitHub Copilot, and we want to adopt zero-trust principles with AI, just as we do with everything else in IT. Watch out for hallucinations, for example: GitHub Copilot might produce code that's syntactically correct but logically flawed or semantically off track. That's to be expected, and that's why we keep a human in the loop. Developer review and post-generation testing are critical to ensure functionality. I've been working…
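To make that concrete, here is a minimal Python sketch (not from the course) of the kind of suggestion Copilot could produce: it parses and runs, but the logic is wrong for even-length input, and a simple post-generation test catches it. The median function and the pytest-style test are hypothetical examples, not part of the lesson.

```python
# Hypothetical Copilot-style suggestion: syntactically correct, logically flawed.
def median(values: list[float]) -> float:
    """Return the median of a non-empty list of numbers."""
    ordered = sorted(values)
    # Bug: for even-length input this returns the upper-middle element
    # instead of averaging the two middle elements.
    return ordered[len(ordered) // 2]


# Post-generation test a reviewer might add; it exposes the flaw immediately.
def test_median_even_length():
    assert median([1, 2, 3, 4]) == 2.5  # buggy version returns 3, so this fails
```

Running the test (for example, with pytest) flags the bad suggestion before it ships, which is exactly the human-in-the-loop review and post-generation testing being described.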
