|
This guide focuses on configuring and using the various LLM clients supported for running Giskard's LLM-assisted functionalities. We use [LiteLLM](https://github.com/BerriAI/litellm) to handle the model calls; you can find the list of supported models in the [LiteLLM documentation](https://docs.litellm.ai/docs/providers).
|
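In practice, configuration comes down to two global setters, `giskard.llm.set_llm_model` and `giskard.llm.set_embedding_model`, both of which take LiteLLM-style `"<provider>/<model>"` strings. Here is a minimal sketch (the OpenAI model names below are only examples; any supported provider works the same way):

```python
import giskard

# LiteLLM routes each call based on the "<provider>/<model>" prefix
giskard.llm.set_llm_model("openai/gpt-4o")
giskard.llm.set_embedding_model("openai/text-embedding-3-small")
```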
## OpenAI Client Setup

For more information, see the [OpenAI LiteLLM documentation](https://docs.litellm.ai/docs/providers/openai).
...

```python
giskard.llm.set_llm_model("gemini/gemini-1.5-pro")
giskard.llm.set_embedding_model("gemini/text-embedding-004")
```
|
## Groq Client Setup

For more information, see the [Groq LiteLLM documentation](https://docs.litellm.ai/docs/providers/groq).

**Note: Groq does not currently support embedding models.**
For a complete list of supported embedding providers, see the [LiteLLM embedding documentation](https://docs.litellm.ai/docs/embedding/supported_embedding).
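
Because Groq has no embedding endpoint, you would typically pair a Groq LLM with an embedding model from another provider. A minimal sketch, assuming you also have an OpenAI API key (the embedding model name is only an example):

```python
import os
import giskard

os.environ["GROQ_API_KEY"] = ""    # e.g. "my-groq-api-key"
os.environ["OPENAI_API_KEY"] = ""  # e.g. "my-openai-api-key"

# LLM calls go to Groq; embedding calls go to OpenAI
giskard.llm.set_llm_model("groq/llama-3.3-70b-versatile")
giskard.llm.set_embedding_model("openai/text-embedding-3-small")
```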

### Setup using environment variables

```python
import os
import giskard

os.environ["GROQ_API_KEY"] = ""  # e.g. "my-groq-api-key"

# Optional: set the LLM model (the default is llama-3.3-70b-versatile)
giskard.llm.set_llm_model("groq/llama-3.3-70b-versatile")
```

### Setup using completion params

```python
import giskard

api_key = ""  # e.g. "my-groq-api-key"
giskard.llm.set_llm_model("groq/llama-3.3-70b-versatile", api_key=api_key)
```
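
As the section name suggests, extra keyword arguments passed to `giskard.llm.set_llm_model` (here `api_key`) are forwarded as completion parameters to the underlying LiteLLM calls.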

## Custom Client Setup

For more information, see the [Custom Format LiteLLM documentation](https://docs.litellm.ai/docs/providers/custom_llm_server).
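
A minimal sketch of what a custom client can look like, following the `CustomLLM` interface from the LiteLLM docs linked above (the class name, provider name, and mocked response are illustrative):

```python
import litellm
from litellm import CustomLLM

import giskard


class MyCustomLLM(CustomLLM):
    def completion(self, *args, **kwargs) -> litellm.ModelResponse:
        # Illustrative handler: return a canned reply instead of calling
        # a real backend (mock_response short-circuits the request).
        return litellm.completion(
            model="my-custom-model",
            messages=[{"role": "user", "content": "Hello"}],
            mock_response="Hi, I am your custom LLM!",
        )


# Register the handler under a custom provider prefix...
litellm.custom_provider_map = [
    {"provider": "my-custom-llm", "custom_handler": MyCustomLLM()}
]

# ...then point Giskard at it with the usual "<provider>/<model>" string.
giskard.llm.set_llm_model("my-custom-llm/my-custom-model")
```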
|