The VLM Run Python SDK is the official client library for the VLM Run API platform, providing a convenient way to interact with our REST APIs.
```bash
pip install vlmrun
```

The package provides optional features that can be installed based on your needs (a quick availability check is sketched after this list):
- Video processing features (numpy, opencv-python): `pip install "vlmrun[video]"`
- Document processing features (pypdfium2): `pip install "vlmrun[doc]"`
- OpenAI SDK integration (for chat completions API): `pip install "vlmrun[openai]"`
- All optional features: `pip install "vlmrun[all]"`
```python
from PIL import Image
from vlmrun.client import VLMRun
from vlmrun.common.utils import remote_image

# Initialize the client
client = VLMRun(api_key="<your-api-key>")

# Process an image using a local file or remote URL
image: Image.Image = remote_image("https://storage.googleapis.com/vlm-data-public-prod/hub/examples/document.invoice/invoice_1.jpg")
response = client.image.generate(
    images=[image],
    domain="document.invoice"
)
print(response)

# Or process an image directly from a URL
response = client.image.generate(
    urls=["https://storage.googleapis.com/vlm-data-public-prod/hub/examples/document.invoice/invoice_1.jpg"],
    domain="document.invoice"
)
print(response)
```
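The same `client.image.generate` call accepts images loaded from disk; a minimal sketch, assuming a local copy of the invoice (the file path is illustrative):

```python
from PIL import Image
from vlmrun.client import VLMRun

client = VLMRun(api_key="<your-api-key>")

# Load the image from a local file instead of a remote URL
# (the path below is a placeholder).
image = Image.open("invoice_1.jpg")

response = client.image.generate(
    images=[image],
    domain="document.invoice"
)
print(response)
```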
The VLM Run SDK provides OpenAI-compatible chat completions through the agent endpoint. This allows you to use the familiar OpenAI API with VLM Run's powerful vision-language models.

```python
from vlmrun import VLMRun

client = VLMRun(
    api_key="your-key",
    base_url="https://agent.vlm.run/v1"
)

response = client.agent.completions.create(
    model="vlmrun-orion-1",
    messages=[
        {"role": "user", "content": "Hello!"}
    ]
)
print(response.choices[0].message.content)
```

For async support:
```python
import asyncio

from vlmrun import VLMRun

client = VLMRun(api_key="your-key", base_url="https://agent.vlm.run/v1")

async def main():
    response = await client.agent.async_completions.create(
        model="vlmrun-orion-1",
        messages=[{"role": "user", "content": "Hello!"}]
    )
    print(response.choices[0].message.content)

asyncio.run(main())
```

Installation: install OpenAI support with `pip install "vlmrun[openai]"`.
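Because the agent endpoint speaks the OpenAI chat-completions protocol, the official `openai` client (pulled in by the `openai` extra) should also work against it; a sketch under that assumption:

```python
from openai import OpenAI

# Point the stock OpenAI client at the VLM Run agent endpoint.
client = OpenAI(
    api_key="your-key",
    base_url="https://agent.vlm.run/v1",
)

response = client.chat.completions.create(
    model="vlmrun-orion-1",
    messages=[{"role": "user", "content": "Hello!"}],
)
print(response.choices[0].message.content)
```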
- 💬 Need help? Email us at support@vlm.run or join our Discord
- 📚 Check out our Documentation
- 📣 Follow us on Twitter and LinkedIn