Checked other resources
- This is a bug, not a usage question.
- I added a clear and descriptive title that summarizes this issue.
- I used the GitHub search to find a similar question and didn't find it.
- I am sure that this is a bug in LangChain rather than my code.
- The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
- This is not related to the langchain-community package.
- I posted a self-contained, minimal, reproducible example. A maintainer can copy it and run it AS IS.
Package (Required)
- langchain
- langchain-openai
- langchain-anthropic
- langchain-classic
- langchain-core
- langchain-cli
- langchain-model-profiles
- langchain-tests
- langchain-text-splitters
- langchain-chroma
- langchain-deepseek
- langchain-exa
- langchain-fireworks
- langchain-groq
- langchain-huggingface
- langchain-mistralai
- langchain-nomic
- langchain-ollama
- langchain-perplexity
- langchain-prompty
- langchain-qdrant
- langchain-xai
- Other / not sure / general
Example Code (Python)

```python
from pydantic import BaseModel, Field

from langchain.agents import create_agent
from langchain.chat_models import init_chat_model
from langchain_core.tools import tool


class WeatherBaseModel(BaseModel):
    """Weather response."""

    temperature: float = Field(description="The temperature in fahrenheit")
    condition: str = Field(description="Weather condition")


model = init_chat_model(
    model="gemini-3-flash-preview",
    model_provider="google_genai",
)


@tool
def get_weather(city: str) -> str:
    """Get the weather for a city."""
    return f"The weather in {city} is sunny and 75°F."


agent = create_agent(
    model,
    system_prompt=(
        "You are a helpful weather assistant. Please call the get_weather tool "
        "once, then use the WeatherReport tool to generate the final response."
    ),
    tools=[get_weather],
    response_format=WeatherBaseModel,
)

response = agent.invoke(
    {"messages": [{"role": "user", "content": "What's the weather in Boston?"}]}
)
```

Error Message and Stack Trace (if applicable)
```
{'messages': [HumanMessage(content="What's the weather in Boston?", additional_kwargs={}, response_metadata={}, id='56e1980d-2c82-4c87-9cae-5b7cc21fa3b0'),
AIMessage(content=[], additional_kwargs={'function_call': {'name': 'get_weather', 'arguments': '{"city": "Boston"}'}, '__gemini_function_call_thought_signatures__': {'2ef57987-1a7c': 'Ep0BCpoBAXLI2nzBleFy/WyG7q1KI3oy4GdqcFsvz7UjZkr0B/'}}, response_metadata={'finish_reason': 'STOP', 'model_name': 'gemini-3-flash-preview', 'safety_ratings': [], 'model_provider': 'google_genai'}, id='lc_run--019b4be1-f0fd-7a80-', tool_calls=[{'name': 'get_weather', 'args': {'city': 'Boston'}, 'id': '2ef57987-1a7c-4f72', 'type': 'tool_call'}], usage_metadata={'input_tokens': 150, 'output_tokens': 43, 'total_tokens': 193, 'input_token_details': {'cache_read': 0}, 'output_token_details': {'reasoning': 27}}),
ToolMessage(content='The weather in Boston is sunny and 75°F.', name='get_weather', id='430f0bc2-d1a5-4601', tool_call_id='2ef57987-1a7c-4f72'),
AIMessage(content=[], additional_kwargs={'function_call': {'name': 'WeatherBaseModel', 'arguments': '{"temperature": 75, "condition": "sunny"}'}, '__gemini_function_call_thought_signatures__': {'3eb5b1cf-0d0a-4033-a02f': 'EpAyCo0yAXLI2nybwYSeDGK0rqlTaHBpaVUzHVtRPJseIWLuIqQP4rohYr'}}, response_metadata={'finish_reason': 'STOP', 'model_name': 'gemini-3-flash-preview', 'safety_ratings': [], 'model_provider': 'google_genai'}, id='lc_run--019b4be1-f477-7250-a8f9-9dad0cb0c764-0', tool_calls=[{'name': 'WeatherBaseModel', 'args': {'temperature': 75, 'condition': 'sunny'}, 'id': '3eb5b1cf-0d0a-4033-a02f', 'type': 'tool_call'}], usage_metadata={'input_tokens': 219, 'output_tokens': 1625, 'total_tokens': 1844, 'input_token_details': {'cache_read': 0}, 'output_token_details': {'reasoning': 1604}}),
ToolMessage(content="Returning structured response: temperature=75.0 condition='sunny'", name='WeatherBaseModel', id='b4dd4592-9bdd-4e9f-a34c', tool_call_id='3eb5b1cf-0d0a-4033-a02f')],
'structured_response': WeatherBaseModel(temperature=75.0, condition='sunny')}
```

Description
Supplying a raw schema (e.g., a Pydantic model) to create_agent(..., response_format=MyModel) uses AutoStrategy.
For Gemini 3 chat models (gemini-3-flash-preview, gemini-3-pro-preview), the AutoStrategy check still returns False whenever tools is non-empty. As a result, the agent is forced into ToolStrategy, even though Gemini 3 now supports provider-native structured output with tools (link).
Manually wrapping the schema in ProviderStrategy(MyModel) works, but users shouldn’t need that workaround.
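The routing behavior described above can be illustrated with a small, purely hypothetical sketch. The function and strategy names below are invented for illustration and do not match langchain's internal code:

```python
# Hypothetical illustration of the AutoStrategy behavior reported above.
# choose_strategy and the returned strings are invented names; the real
# resolution logic lives inside langchain and is more involved.

def choose_strategy(model_name: str, has_tools: bool) -> str:
    """Mimic the reported behavior: any bound tool forces ToolStrategy."""
    if has_tools:
        # Reported bug: this branch never special-cases Gemini 3 models,
        # even though they support native structured output alongside tools.
        return "ToolStrategy"
    return "ProviderStrategy"


# With tools bound, even a Gemini 3 model is routed to ToolStrategy.
print(choose_strategy("gemini-3-flash-preview", has_tools=True))
```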
Expected
AutoStrategy should recognize Gemini 3 models as supporting structured output with tools, while pre–Gemini 3 variants and aliases (gemini-flash-latest, gemini-flash-lite-latest) remain blocked until they point to a 3-series backend.
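As a hedged sketch of the expected check (the function name and the blocked-alias set are assumptions for illustration, not langchain API):

```python
# Hypothetical predicate encoding the expected behavior described above.
# Nothing here comes from the langchain codebase; it only captures which
# Gemini model identifiers should pass the capability check.

def supports_structured_output_with_tools(model_name: str) -> bool:
    """True when the model can combine bound tools with native structured output."""
    # Aliases that may still resolve to a pre-Gemini 3 backend stay blocked.
    blocked_aliases = {"gemini-flash-latest", "gemini-flash-lite-latest"}
    if model_name in blocked_aliases:
        return False
    # Gemini 3 family, e.g. gemini-3-flash-preview and gemini-3-pro-preview.
    return model_name.startswith("gemini-3")
```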
System Info
System Information
OS: Darwin
OS Version: Darwin Kernel Version 24.3.0: Thu Jan 2 20:24:23 PST 2025; root:xnu-11215.81.4~3/RELEASE_ARM64_T8122
Python Version: 3.12.7 (main, Oct 16 2024, 07:12:08) [Clang 18.1.8 ]
Package Information
langchain_core: 1.2.5
langchain: 1.2.0
langchain_community: 0.4.1
langsmith: 0.5.0
langchain_anthropic: 1.3.0
langchain_classic: 1.0.0
langchain_google_genai: 4.1.2
langchain_mcp_adapters: 0.2.1
langchain_openai: 1.1.6
langchain_text_splitters: 1.1.0
langgraph_sdk: 0.3.1
Optional packages not installed
langserve
Other Dependencies
aiohttp: 3.13.2
anthropic: 0.75.0
dataclasses-json: 0.6.7
filetype: 1.2.0
google-genai: 1.56.0
httpx: 0.28.1
httpx-sse: 0.4.3
jsonpatch: 1.33
langgraph: 1.0.5
mcp: 1.25.0
numpy: 2.4.0
openai: 2.14.0
opentelemetry-api: 1.39.1
opentelemetry-exporter-otlp-proto-http: 1.39.1
opentelemetry-sdk: 1.39.1
orjson: 3.11.5
packaging: 25.0
pydantic: 2.12.5
pydantic-settings: 2.12.0
pytest: 9.0.2
pyyaml: 6.0.3
requests: 2.32.5
requests-toolbelt: 1.0.0
sqlalchemy: 2.0.45
tenacity: 9.1.2
tiktoken: 0.12.0
typing-extensions: 4.15.0
uuid-utils: 0.12.0
zstandard: 0.25.0