Commit ff8d141

Kunjan Shah committed

docs: Add note about Groq embedding support

1 parent 7c86562 commit ff8d141

File tree

1 file changed (+3 -3 lines changed)


docs/open_source/setting_up/index.md

Lines changed: 3 additions & 3 deletions
@@ -7,6 +7,9 @@ This guide focuses primarily on configuring and using various LLM clients suppor
 
 More information on [Groq LiteLLM documentation](https://docs.litellm.ai/docs/providers/groq)
 
+**Note: Groq does not currently support embedding models.**
+For a complete list of supported embedding providers, see: [LiteLLM Embedding Documentation](https://docs.litellm.ai/docs/embedding/supported_embedding)
+
 ### Setup using .env variables
 
 ```python
@@ -17,9 +20,6 @@ os.environ["GROQ_API_KEY"] = "" # "my-groq-api-key"
 
 # Optional, setup a model (default LLM is llama-3.3-70b-versatile)
 giskard.llm.set_llm_model("groq/llama-3.3-70b-versatile")
-
-# Note: Groq does not currently support embedding models
-# Use another provider for embeddings if needed
 ```
 
 ### Setup using completion params
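For context, a minimal sketch of the full setup the updated note points at: Groq serving LLM completions while a different LiteLLM-supported provider supplies embeddings. The `giskard.llm.set_embedding_model` call and the OpenAI embedding model name are assumptions drawn from Giskard's setup pattern for other providers; they are not part of this diff.

```python
import os

import giskard

# LLM completions via Groq, as shown in the diff above
os.environ["GROQ_API_KEY"] = ""  # "my-groq-api-key"
giskard.llm.set_llm_model("groq/llama-3.3-70b-versatile")

# Groq exposes no embedding models, so embeddings must come from another
# LiteLLM-supported provider. OpenAI here is only an illustrative choice,
# and set_embedding_model is assumed from Giskard's other provider guides.
os.environ["OPENAI_API_KEY"] = ""  # "my-openai-api-key"
giskard.llm.set_embedding_model("text-embedding-3-small")
```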

0 commit comments

Comments
 (0)