From the course: Build with AI: LLM-Powered Applications with Streamlit
Guidelines for working with AI and APIs - Python Tutorial
- [Instructor] Before you start working with AI and APIs, you need to put some guardrails in place. I'll cover how to keep your API key safe, monitor usage to control costs, implement robust error handling, and apply foundational AI ethics. These best practices will keep your Streamlit application secure, reliable, and responsible. When working with APIs, it is common to have what is called an API key. Your API key is like a password. If this key is exposed or shared, others can charge against your account and even cause it to be shut down. Never paste this key directly into your code, especially if it will be shared with others or posted publicly, such as in a GitHub repository. There are a few ways to protect your API key, such as storing it in an environment variable or in a local file excluded by .gitignore. In Python, you can use os.getenv with the name of your API key variable to read it from the environment, or you can read it from a text file to…
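As a rough sketch of the two approaches described above, the helper below first checks an environment variable and then falls back to a local file. The variable name OPENAI_API_KEY, the filename api_key.txt, and the function name are illustrative choices, not something mandated by the course; the fallback file would be listed in .gitignore so it never reaches your repository.

```python
import os


def load_api_key():
    """Load an API key from the environment, falling back to a local file.

    OPENAI_API_KEY and api_key.txt are example names; the file is assumed
    to be excluded from version control via .gitignore.
    """
    # Prefer the environment variable so the key never lives in the code.
    key = os.getenv("OPENAI_API_KEY")
    if key:
        return key
    # Fall back to a local file kept out of the repo by .gitignore.
    try:
        with open("api_key.txt") as f:
            return f.read().strip()
    except FileNotFoundError:
        raise RuntimeError(
            "No API key found: set OPENAI_API_KEY or create api_key.txt"
        )
```

Either way, the key stays out of your source files, so sharing or publishing the code does not expose your account.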
Contents
- What are large language models (LLMs)? (3m 31s)
- What is retrieval-augmented generation (RAG)? (3m 21s)
- Guidelines for working with AI and APIs (3m 43s)
- How to connect to OpenAI API (7m 58s)
- Send user prompts to an LLM and display the response (13m 34s)
- Save and display chat history in your application (5m 9s)