From the course: Responsible GitHub Copilot: Creating Reliable Code Ethically

Next steps

- In this course, Responsible AI with GitHub Copilot, we talked about what responsible usage of GitHub Copilot means. We talked about things like: hey, you are the pilot. You are completely in control of the questions you ask GitHub Copilot, the direction you take, and what you do with the results it gives you. I hope you're not blindly accepting whatever it suggests, because we also talked about data freshness: you need to take on the responsibility of making sure that your code and the methods you use are still up to date. We also talked about model prejudices, for example: be aware of the biases in the generated model, and think about how you can best mitigate them.

All of this boils down to development best practices. The GitHub Copilot models are helping you along in your coding, but you must validate everything they produce, just as you would with a human coworker. Add unit tests to make sure your code still does what you think it does, and add regression testing and security testing on top of that. We're now constantly generating more code using generative AI, so it becomes our responsibility to validate it as well, and to make sure we're not blindly trusting whatever suggestion it gives us. You need to verify that the code actually works, works correctly, and fits your requirements and constraints, and take some pride and ownership in the code you're producing. Copilot is only there to help you produce that code faster, and we're only touching the tip of the iceberg of what generative AI can do.

If you have extra questions or thoughts that you want to share, then please reach out to me through LinkedIn. I'm happy to help you further. I'll see you around, until next time.
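As a minimal sketch of the "validate what Copilot gives you" habit: suppose Copilot suggested a small `slugify` helper (a hypothetical example, not from the course). Rather than accepting it blindly, you pin down the behavior you expect in a unit test before moving on.

```python
import unittest


def slugify(title):
    # Hypothetical Copilot-style suggestion: turn a title into a URL slug.
    # Lowercase the text and join whitespace-separated words with hyphens.
    return "-".join(title.lower().split())


class TestSlugify(unittest.TestCase):
    # Don't blindly accept the suggestion: encode your requirements as tests.
    def test_basic_title(self):
        self.assertEqual(
            slugify("Responsible GitHub Copilot"),
            "responsible-github-copilot",
        )

    def test_extra_whitespace_is_collapsed(self):
        self.assertEqual(slugify("  Hello   World "), "hello-world")


if __name__ == "__main__":
    unittest.main(exit=False)
```

Regression and security tests follow the same idea: each suggestion you accept gets a test that fails loudly if the generated code ever stops meeting your requirements.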