Merged
Commits
42 commits
f61cc21
Created litellm client
kevinmessiaen Nov 5, 2024
e3844ff
Updated documentation
kevinmessiaen Nov 5, 2024
c524217
Added litellm embedding
kevinmessiaen Nov 5, 2024
d6b032f
Code improvement
kevinmessiaen Nov 5, 2024
e31cdfa
Added deprecated warnings
kevinmessiaen Nov 5, 2024
0f5ade7
Fixed typo
kevinmessiaen Nov 5, 2024
3045060
Improved documentation and llm setup
kevinmessiaen Nov 7, 2024
f49a2bc
Added back fastembed as default
kevinmessiaen Nov 7, 2024
1157bda
Removed todo: LiteLLM does not support embeddings for Gemini and Ollama
kevinmessiaen Nov 7, 2024
10e4113
Typo
kevinmessiaen Nov 7, 2024
5fb6a78
Fixed embeddings
kevinmessiaen Nov 7, 2024
f897262
Default model to gpt-4o
kevinmessiaen Nov 7, 2024
2657b19
Code cleanup
kevinmessiaen Nov 7, 2024
b04f126
Code cleanup
kevinmessiaen Nov 7, 2024
4633aa4
Skip LiteLLM tests with pydantic < 2
kevinmessiaen Nov 8, 2024
63ace19
Added test for custom client
kevinmessiaen Nov 8, 2024
1b382ee
Added test for embedding
kevinmessiaen Nov 8, 2024
713f0b0
Fixed tests
kevinmessiaen Nov 8, 2024
deca09a
Merge branch 'main' into feature/litellm
henchaves Nov 14, 2024
e54c414
Merge branch 'main' into feature/litellm
henchaves Nov 14, 2024
dee0e83
Reintroduced old way to set LLM models
kevinmessiaen Nov 15, 2024
7703d51
Reintroduced old way to set LLM models
kevinmessiaen Nov 15, 2024
5349fc2
Reintroduced old clients
kevinmessiaen Nov 15, 2024
6458f97
Merge branch 'main' into feature/litellm
kevinmessiaen Nov 15, 2024
2756e27
Fixed OpenAI embeddings
kevinmessiaen Nov 15, 2024
2b88ed3
Update Setting up the LLM client docs
henchaves Nov 15, 2024
1dc73d9
Update Setting up the LLM client docs pt 2
henchaves Nov 15, 2024
b39731e
Update testset generation docs
henchaves Nov 15, 2024
7eaf007
Update scan llm docs
henchaves Nov 15, 2024
5f51327
Merge branch 'main' into feature/litellm
henchaves Nov 18, 2024
39c4fa9
Removed response_format with ollama models due to issue in litellm
kevinmessiaen Nov 19, 2024
b09d266
Added dumb trim
kevinmessiaen Nov 19, 2024
911d6e5
Fixed output
kevinmessiaen Nov 19, 2024
40bede9
Add _parse_json_output to LiteLLM
henchaves Nov 19, 2024
77e6a4f
Added way to disable structured output
kevinmessiaen Nov 20, 2024
cab45a1
Fix test_litellm_client
henchaves Nov 21, 2024
5f39da1
Merge branch 'main' into feature/litellm
henchaves Nov 21, 2024
78dd03e
Check if format is json before calling _parse_json_output
henchaves Nov 21, 2024
82712c7
Set LITELLM_LOG as error level
henchaves Nov 21, 2024
a61e4b2
Add `disable_structured_output` to bedrock examples
henchaves Nov 21, 2024
3d33028
Format files
henchaves Nov 21, 2024
a571312
Fix sonar issues
henchaves Nov 21, 2024
Update scan llm docs
henchaves committed Nov 15, 2024
commit 7eaf007eb8285d46c48930237135ecc0f4ae400f
98 changes: 49 additions & 49 deletions docs/open_source/scan/scan_llm/index.md
@@ -38,18 +38,13 @@ processed.

## Before starting

In the following example, we illustrate the procedure using **OpenAI** and **Azure OpenAI**; however, please note that
our platform supports a variety of language models. For details on configuring different models, visit
our [🤖 Setting up the LLM Client page](../../setting_up/index.md)

Before starting, make sure you have installed the LLM flavor of Giskard:
First of all, make sure you have installed the LLM flavor of Giskard:

```bash
pip install "giskard[llm]"
```
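A quick way to verify the installation before continuing is to check that the package resolves in your environment — a minimal sketch using only the standard library:

```python
import importlib.util

# Look up the "giskard" package without importing it;
# find_spec returns None when the package is not installed
spec = importlib.util.find_spec("giskard")
print("giskard installed:", spec is not None)
```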

For the LLM-assisted detectors to work, you need to have an OpenAI API key. You can set it in your notebook
like this:
For the LLM-assisted detectors to work, you need to set up an LLM client. Our platform supports a variety of language models; you can find the details on configuring different models on our [🤖 Setting up the LLM Client page](../../setting_up/index.md), or follow the instructions below for each provider:

:::::::{tab-set}
::::::{tab-item} OpenAI
@@ -58,18 +53,18 @@ like this:

```python
import os
import giskard

os.environ["OPENAI_API_KEY"] = "your-api-key"
os.environ["OPENAI_API_KEY"] = "" # "my-openai-api-key"

# Optional, setup a model (default model is gpt-4)
giskard.llm.set_llm_model("gpt-4")
giskard.llm.set_embedding_model("text-embedding-ada-002")
# Optional, setup a model (default LLM is gpt-4o, default embedding model is text-embedding-3-small)
giskard.llm.set_llm_model("gpt-4o")
giskard.llm.set_embedding_model("text-embedding-3-small")

# Optional Keys - OpenAI Organization, OpenAI API Base
os.environ["OPENAI_ORGANIZATION"] = "your-org-id"
os.environ["OPENAI_API_BASE"] = "openaiai-api-base"
os.environ["OPENAI_ORGANIZATION"] = "" # "my-openai-organization"
os.environ["OPENAI_API_BASE"] = "" # "https://api.openai.com"
```

More information on [LiteLLM documentation](https://docs.litellm.ai/docs/providers/openai)
More information on [OpenAI LiteLLM documentation](https://docs.litellm.ai/docs/providers/openai)

::::::
::::::{tab-item} Azure OpenAI
@@ -82,16 +77,15 @@

```python
os.environ["AZURE_API_KEY"] = "" # "my-azure-api-key"
os.environ["AZURE_API_BASE"] = "" # "https://example-endpoint.openai.azure.com"
os.environ["AZURE_API_VERSION"] = "" # "2023-05-15"

giskard.llm.set_llm_model("azure/<your_deployment_name>")
giskard.llm.set_embedding_model("azure/<your_deployment_name>")
giskard.llm.set_llm_model("azure/<your_llm_name>")
giskard.llm.set_embedding_model("azure/<your_embed_model_name>")

# optional
# Optional Keys - Azure AD Token, Azure API Type
os.environ["AZURE_AD_TOKEN"] = ""
os.environ["AZURE_API_TYPE"] = ""
giskard.llm.set_embedding_model('my-embedding-model')
```

More information on [LiteLLM documentation](https://docs.litellm.ai/docs/providers/azure)
More information on [Azure LiteLLM documentation](https://docs.litellm.ai/docs/providers/azure)

::::::
::::::{tab-item} Mistral
@@ -100,99 +94,105 @@ More information on [LiteLLM documentation](https://docs.litellm.ai/docs/provide

```python
import os
import giskard

os.environ['MISTRAL_API_KEY'] = ""
os.environ["MISTRAL_API_KEY"] = "" # "my-mistral-api-key"

giskard.llm.set_llm_model("mistral/mistral-tiny")
giskard.llm.set_llm_model("mistral/mistral-large-latest")
giskard.llm.set_embedding_model("mistral/mistral-embed")
```

More information on [LiteLLM documentation](https://docs.litellm.ai/docs/providers/mistral)
More information on [Mistral LiteLLM documentation](https://docs.litellm.ai/docs/providers/mistral)

::::::
::::::{tab-item} Ollama

```python
import giskard

giskard.llm.set_llm_model("ollama/llama2", api_base="http://localhost:11434") # See supported models here: https://docs.litellm.ai/docs/providers/ollama#ollama-models
api_base = "http://localhost:11434" # default api_base for local Ollama

# See supported models here: https://docs.litellm.ai/docs/providers/ollama#ollama-models
giskard.llm.set_llm_model("ollama/llama3", api_base=api_base)
giskard.llm.set_embedding_model("ollama/nomic-embed-text", api_base=api_base)
```

More information on [LiteLLM documentation](https://docs.litellm.ai/docs/providers/ollama)
More information on [Ollama LiteLLM documentation](https://docs.litellm.ai/docs/providers/ollama)

::::::
::::::{tab-item} AWS Bedrock

More information on [LiteLLM documentation](https://docs.litellm.ai/docs/providers/bedrock)
More information on [Bedrock LiteLLM documentation](https://docs.litellm.ai/docs/providers/bedrock)

```python
import os
import giskard

os.environ["AWS_ACCESS_KEY_ID"] = ""
os.environ["AWS_SECRET_ACCESS_KEY"] = ""
os.environ["AWS_REGION_NAME"] = ""
os.environ["AWS_ACCESS_KEY_ID"] = "" # "my-aws-access-key"
os.environ["AWS_SECRET_ACCESS_KEY"] = "" # "my-aws-secret-access-key"
os.environ["AWS_REGION_NAME"] = "" # "us-west-2"

giskard.llm.set_llm_model("bedrock/anthropic.claude-3-sonnet-20240229-v1:0")
giskard.llm.set_embedding_model("bedrock/amazon.titan-embed-text-v1")
giskard.llm.set_embedding_model("bedrock/amazon.titan-embed-image-v1")
```

::::::
::::::{tab-item} Gemini

More information on [LiteLLM documentation](https://docs.litellm.ai/docs/providers/gemini)
More information on [Gemini LiteLLM documentation](https://docs.litellm.ai/docs/providers/gemini)

```python
import os
import giskard

os.environ["GEMINI_API_KEY"] = "your-api-key"
os.environ["GEMINI_API_KEY"] = "" # "my-gemini-api-key"

giskard.llm.set_llm_model("gemini/gemini-pro")
giskard.llm.set_embedding_model("gemini/text-embedding-004")
```

::::::
::::::{tab-item} Custom Client

More information on [LiteLLM documentation](https://docs.litellm.ai/docs/providers/custom_llm_server )
More information on [Custom Format LiteLLM documentation](https://docs.litellm.ai/docs/providers/custom_llm_server)

```python
import requests
import giskard
import litellm
import os
import requests
from typing import Optional

import litellm
import giskard


class MyCustomLLM(litellm.CustomLLM):
    def completion(self, messages: str, api_key: Optional[str] = None, **kwargs) -> litellm.ModelResponse:
        api_key = api_key or os.environ.get('MY_SECRET_KEY')
        if api_key is None:
            raise litellm.AuthenticationError("Api key is not provided")
    def completion(self, messages: str, api_key: Optional[str] = None, **kwargs) -> litellm.ModelResponse:
        api_key = api_key or os.environ.get("MY_SECRET_KEY")
        if api_key is None:
            raise litellm.AuthenticationError("`api_key` was not provided")

        response = requests.post('https://www.my-fake-llm.ai/chat/completion', json={
            'messages': messages
        }, headers={'Authorization': api_key})
        response = requests.post(
            "https://www.my-custom-llm.ai/chat/completion",
            json={"messages": messages},
            headers={"Authorization": api_key},
        )

        return litellm.ModelResponse(**response.json())
        return litellm.ModelResponse(**response.json())

os.environ["MY_SECRET_KEY"] = "" # "my-secret-key"

my_custom_llm = MyCustomLLM()

litellm.custom_provider_map = [ # 👈 KEY STEP - REGISTER HANDLER
{"provider": "my-custom-llm", "custom_handler": my_custom_llm}
{"provider": "my-custom-llm-endpoint", "custom_handler": my_custom_llm}
]

api_key = os.environ['MY_SECRET_KEY']
api_key = os.environ["MY_SECRET_KEY"]

giskard.llm.set_llm_model("my-custom-llm/my-fake-llm-model", api_key=api_key)
giskard.llm.set_llm_model("my-custom-llm-endpoint/my-custom-model", api_key=api_key)
```

::::::
:::::::

We are now ready to start.

(model-wrapping)=

## Step 1: Wrap your model