
LiteLLM fails to connect to OpenAI-compatible backend (DMR) — conflicting behavior between model prefix and provider detection #169

@Gharlyk

Description


When configuring Bytebot’s LiteLLM proxy to talk to a Docker Model Runner (DMR) instance that exposes an OpenAI-compatible API, the proxy either fails to register a healthy deployment or returns NotFoundError / AuthenticationError, depending on the configuration.

Even though DMR responds correctly to direct requests, LiteLLM cannot bridge requests to it through litellm-config.yaml.

My litellm-config.yaml:
model_list:
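
For reference, a typical model_list entry for an OpenAI-compatible backend looks roughly like the sketch below. The api_base and api_key are placeholders I am assuming for illustration (the DMR base URL should be whatever is reachable from the proxy container), not values confirmed from my setup:

model_list:
  - model_name: local-qwen3                 # alias that Bytebot selects in the UI
    litellm_params:
      # "openai/" tells LiteLLM to use its generic OpenAI-compatible provider;
      # everything after that prefix is forwarded to the backend as the model name.
      model: openai/ai/qwen3-vl:latest
      # Placeholder: the DMR OpenAI-compatible base URL as seen from the proxy container.
      api_base: http://model-runner.docker.internal/engines/v1
      # Local runners usually ignore the key, but LiteLLM expects one to be set.
      api_key: "dummy-key"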

Watching the proxy logs with: docker logs -f bytebot-bytebot-llm-proxy-1

As soon as I connect to Bytebot at localhost:9992 I get this:

LiteLLM: Proxy initialized with Config, Set models: local-qwen3
INFO: 172.18.0.5:56186 - "GET /model/info HTTP/1.1" 200 OK

I see local-qwen3 in the drop-down list. But as soon as I request a task in the interface, like "open firefox", I get a task error:
13:03:23 - LiteLLM Proxy:ERROR: common_request_processing.py:699 - litellm.proxy.proxy_server._handle_llm_api_exception(): Exception occured - litellm.NotFoundError: NotFoundError: OpenAIException - not found. Received Model Group=openai/ai/qwen3-vl:latest Available Model Group Fallbacks=None

The error means the LiteLLM proxy can reach the OpenAI-compatible API, but the model name it asks for does not exist on the DMR side, because the request is built from the prefixed string openai/ai/qwen3-vl:latest.
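
In other words (my understanding, to be verified against what the DMR endpoint actually lists):

# LiteLLM splits the configured value at the first "/":
#   model: openai/ai/qwen3-vl:latest
#     provider = "openai"  (generic OpenAI-compatible client)
#     model sent to the backend = "ai/qwen3-vl:latest"
# So the part after "openai/" must match an ID that the backend's
# OpenAI-compatible /v1/models route actually returns; otherwise the
# backend replies 404 and LiteLLM surfaces it as NotFoundError for
# the model group.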

Am I the only one having this issue when trying to interface LiteLLM with Docker Model Runner?

Keep in mind that I am a noob in AI topics, so ChatGPT helped me reach this conclusion :)

Many thanks for your answers.
