Model

In this tutorial, we introduce the model APIs integrated in AgentScope, how to use them, and how to integrate new model APIs. The supported model APIs and providers include:

API        Class                Compatible        Streaming  Tools  Vision  Reasoning
---------  -------------------  ----------------  ---------  -----  ------  ---------
OpenAI     OpenAIChatModel      vLLM, DeepSeek    ✅          ✅      ✅       ✅
DashScope  DashScopeChatModel   -                 ✅          ✅      ✅       ✅
Anthropic  AnthropicChatModel   -                 ✅          ✅      ✅       ✅
Gemini     GeminiChatModel      -                 ✅          ✅      ✅       ✅
Ollama     OllamaChatModel      -                 ✅          ✅      ✅       ✅

Note

When using vLLM, you need to configure the appropriate tool-calling parameters for your model at deployment time, such as --enable-auto-tool-choice and --tool-call-parser. For more details, refer to the official vLLM documentation.
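For example, a typical launch command for a tool-calling-capable model might look like the following (the model and parser here are illustrative; choose the parser that matches your model family):

vllm serve Qwen/Qwen2.5-7B-Instruct \
    --enable-auto-tool-choice \
    --tool-call-parser hermes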

Note

For OpenAI-compatible services (e.g. vLLM, DeepSeek), developers can use the OpenAIChatModel class and specify the API endpoint via the client_kwargs parameter: client_kwargs={"base_url": "http://your-api-endpoint"}. For example:

OpenAIChatModel(client_kwargs={"base_url": "http://localhost:8000/v1"})
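A fuller sketch for a hypothetical local vLLM deployment (the model name is a placeholder for whatever model the server hosts, and local endpoints usually accept a dummy API key):

from agentscope.model import OpenAIChatModel

model = OpenAIChatModel(
    model_name="Qwen/Qwen2.5-7B-Instruct",  # placeholder: the served model
    api_key="EMPTY",  # placeholder: local servers typically ignore the key
    client_kwargs={"base_url": "http://localhost:8000/v1"},
)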

Note

Model behavior parameters (such as temperature, maximum output length, etc.) can be preset in the constructor via the generate_kwargs parameter. For example:

OpenAIChatModel(generate_kwargs={"temperature": 0.3, "max_tokens": 1000})

To provide unified model interfaces, the above model classes share the following conventions:

  • The first three arguments of the __call__ method are messages, tools, and tool_choice, representing the input messages, JSON schema of tool functions, and tool selection mode, respectively.

  • The return value is either a ChatResponse instance, or an async generator of ChatResponse chunks in streaming mode (a mode-agnostic sketch follows this list).
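For instance, a minimal sketch that handles both return shapes, using the standard-library inspect.isasyncgen check (the model argument can be an instance of any class above):

import inspect

async def call_and_collect(model, messages):
    """Call a chat model and return the final ChatResponse in either mode."""
    res = await model(messages=messages)
    if inspect.isasyncgen(res):
        last = None
        # Streaming chunks are cumulative, so the last chunk is complete
        async for chunk in res:
            last = chunk
        return last
    return res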

Note

Model APIs differ in their input message formats; refer to Prompt Formatter for more details.

The ChatResponse instance contains the generated thinking/text/tool-use content, together with its identifier, creation time, and usage information.

import asyncio
import os

from agentscope.message import TextBlock, ToolUseBlock, ThinkingBlock, Msg
from agentscope.model import ChatResponse, DashScopeChatModel

response = ChatResponse(
    content=[
        ThinkingBlock(
            type="thinking",
            thinking="I should search for AgentScope on Google.",
        ),
        TextBlock(type="text", text="I'll search for AgentScope on Google."),
        ToolUseBlock(
            type="tool_use",
            id="642n298gjna",
            name="google_search",
            input={"query": "AgentScope?"},
        ),
    ],
)

print(response)
ChatResponse(content=[{'type': 'thinking', 'thinking': 'I should search for AgentScope on Google.'}, {'type': 'text', 'text': "I'll search for AgentScope on Google."}, {'type': 'tool_use', 'id': '642n298gjna', 'name': 'google_search', 'input': {'query': 'AgentScope?'}}], id='2025-12-30 10:10:00.735_b1948b', created_at='2025-12-30 10:10:00.735', type='chat', usage=None, metadata=None)

Taking DashScopeChatModel as an example, we can create a chat model instance and call it with input messages:

async def example_model_call() -> None:
    """An example of using the DashScopeChatModel."""
    model = DashScopeChatModel(
        model_name="qwen-max",
        api_key=os.environ["DASHSCOPE_API_KEY"],
        stream=False,
    )

    res = await model(
        messages=[
            {"role": "user", "content": "Hi!"},
        ],
    )

    # You can directly create a ``Msg`` object with the response content
    msg_res = Msg("Friday", res.content, "assistant")

    print("The response:", res)
    print("The response as Msg:", msg_res)


asyncio.run(example_model_call())
The response: ChatResponse(content=[{'type': 'text', 'text': 'Hello! How can I assist you today?'}], id='2025-12-30 10:10:02.169_611818', created_at='2025-12-30 10:10:02.169', type='chat', usage=ChatUsage(input_tokens=10, output_tokens=9, time=1.432503, type='chat'), metadata=None)
The response as Msg: Msg(id='38Cpm4MmHLLZA8VBTjwMA9', name='Friday', content=[{'type': 'text', 'text': 'Hello! How can I assist you today?'}], role='assistant', metadata=None, timestamp='2025-12-30 10:10:02.169', invocation_id='None')

Streaming

To enable streaming, set the stream parameter in the model constructor to True. When streaming is enabled, the __call__ method returns an async generator that yields ChatResponse instances as they are generated by the model.

Note

The streaming mode in AgentScope is designed to be cumulative: the content of each chunk contains all the previously generated content plus the newly generated piece. A sketch that extracts only the increments follows the example output below.

async def example_streaming() -> None:
    """An example of using the streaming model."""
    model = DashScopeChatModel(
        model_name="qwen-max",
        api_key=os.environ["DASHSCOPE_API_KEY"],
        stream=True,
    )

    generator = await model(
        messages=[
            {
                "role": "user",
                "content": "Count from 1 to 20, and just report the number without any other information.",
            },
        ],
    )
    print("The type of the response:", type(generator))

    i = 0
    async for chunk in generator:
        print(f"Chunk {i}")
        print(f"\ttype: {type(chunk.content)}")
        print(f"\t{chunk}\n")
        i += 1


asyncio.run(example_streaming())
The type of the response: <class 'async_generator'>
Chunk 0
        type: <class 'list'>
        ChatResponse(content=[{'type': 'text', 'text': '1'}], id='2025-12-30 10:10:03.503_57c5e1', created_at='2025-12-30 10:10:03.503', type='chat', usage=ChatUsage(input_tokens=27, output_tokens=1, time=1.332825, type='chat'), metadata=None)

Chunk 1
        type: <class 'list'>
        ChatResponse(content=[{'type': 'text', 'text': '1\n2\n'}], id='2025-12-30 10:10:03.566_0c94aa', created_at='2025-12-30 10:10:03.566', type='chat', usage=ChatUsage(input_tokens=27, output_tokens=4, time=1.395705, type='chat'), metadata=None)

Chunk 2
        type: <class 'list'>
        ChatResponse(content=[{'type': 'text', 'text': '1\n2\n3\n4'}], id='2025-12-30 10:10:03.628_dcffde', created_at='2025-12-30 10:10:03.628', type='chat', usage=ChatUsage(input_tokens=27, output_tokens=7, time=1.45795, type='chat'), metadata=None)

Chunk 3
        type: <class 'list'>
        ChatResponse(content=[{'type': 'text', 'text': '1\n2\n3\n4\n5\n'}], id='2025-12-30 10:10:03.692_28c3e2', created_at='2025-12-30 10:10:03.692', type='chat', usage=ChatUsage(input_tokens=27, output_tokens=10, time=1.521276, type='chat'), metadata=None)

Chunk 4
        type: <class 'list'>
        ChatResponse(content=[{'type': 'text', 'text': '1\n2\n3\n4\n5\n6\n7\n8\n'}], id='2025-12-30 10:10:03.834_fcb956', created_at='2025-12-30 10:10:03.834', type='chat', usage=ChatUsage(input_tokens=27, output_tokens=16, time=1.663628, type='chat'), metadata=None)

Chunk 5
        type: <class 'list'>
        ChatResponse(content=[{'type': 'text', 'text': '1\n2\n3\n4\n5\n6\n7\n8\n9\n10\n1'}], id='2025-12-30 10:10:03.944_338fb7', created_at='2025-12-30 10:10:03.944', type='chat', usage=ChatUsage(input_tokens=27, output_tokens=22, time=1.773452, type='chat'), metadata=None)

Chunk 6
        type: <class 'list'>
        ChatResponse(content=[{'type': 'text', 'text': '1\n2\n3\n4\n5\n6\n7\n8\n9\n10\n11\n12\n1'}], id='2025-12-30 10:10:04.070_3f585c', created_at='2025-12-30 10:10:04.070', type='chat', usage=ChatUsage(input_tokens=27, output_tokens=28, time=1.899638, type='chat'), metadata=None)

Chunk 7
        type: <class 'list'>
        ChatResponse(content=[{'type': 'text', 'text': '1\n2\n3\n4\n5\n6\n7\n8\n9\n10\n11\n12\n13\n14\n1'}], id='2025-12-30 10:10:04.216_a3cb73', created_at='2025-12-30 10:10:04.216', type='chat', usage=ChatUsage(input_tokens=27, output_tokens=34, time=2.045639, type='chat'), metadata=None)

Chunk 8
        type: <class 'list'>
        ChatResponse(content=[{'type': 'text', 'text': '1\n2\n3\n4\n5\n6\n7\n8\n9\n10\n11\n12\n13\n14\n15\n16\n1'}], id='2025-12-30 10:10:04.322_1e9cc1', created_at='2025-12-30 10:10:04.322', type='chat', usage=ChatUsage(input_tokens=27, output_tokens=40, time=2.151931, type='chat'), metadata=None)

Chunk 9
        type: <class 'list'>
        ChatResponse(content=[{'type': 'text', 'text': '1\n2\n3\n4\n5\n6\n7\n8\n9\n10\n11\n12\n13\n14\n15\n16\n17\n18\n1'}], id='2025-12-30 10:10:04.520_dcbbba', created_at='2025-12-30 10:10:04.520', type='chat', usage=ChatUsage(input_tokens=27, output_tokens=46, time=2.349398, type='chat'), metadata=None)

Chunk 10
        type: <class 'list'>
        ChatResponse(content=[{'type': 'text', 'text': '1\n2\n3\n4\n5\n6\n7\n8\n9\n10\n11\n12\n13\n14\n15\n16\n17\n18\n19\n20'}], id='2025-12-30 10:10:04.592_b0480e', created_at='2025-12-30 10:10:04.592', type='chat', usage=ChatUsage(input_tokens=27, output_tokens=50, time=2.421539, type='chat'), metadata=None)

Chunk 11
        type: <class 'list'>
        ChatResponse(content=[{'type': 'text', 'text': '1\n2\n3\n4\n5\n6\n7\n8\n9\n10\n11\n12\n13\n14\n15\n16\n17\n18\n19\n20'}], id='2025-12-30 10:10:04.611_1218ea', created_at='2025-12-30 10:10:04.611', type='chat', usage=ChatUsage(input_tokens=27, output_tokens=50, time=2.441143, type='chat'), metadata=None)
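Because the chunks are cumulative, a consumer that wants only the newly generated text can diff consecutive chunks. A minimal sketch, assuming each chunk carries text blocks as in the output above:

async def print_deltas(generator) -> None:
    """Print only the newly generated text of each cumulative chunk."""
    previous = ""
    async for chunk in generator:
        current = "".join(
            block["text"]
            for block in chunk.content
            if block["type"] == "text"
        )
        # Print the suffix that was not present in the previous chunk
        print(current[len(previous):], end="", flush=True)
        previous = current

Applied to the generator above, this prints the numbers 1 to 20 exactly once, instead of re-printing the accumulated text for every chunk.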

Reasoning

AgentScope supports reasoning models by providing the ThinkingBlock, which carries the model's intermediate reasoning content.

async def example_reasoning() -> None:
    """An example of using the reasoning model."""
    model = DashScopeChatModel(
        model_name="qwen-turbo",
        api_key=os.environ["DASHSCOPE_API_KEY"],
        enable_thinking=True,
    )

    # ``stream`` is not set here; the model streams by default, so the call
    # returns an async generator of cumulative chunks
    res = await model(
        messages=[
            {"role": "user", "content": "Who am I?"},
        ],
    )

    last_chunk = None
    async for chunk in res:
        last_chunk = chunk
    print("The final response:")
    print(last_chunk)


asyncio.run(example_reasoning())
The final response:
ChatResponse(content=[{'type': 'thinking', 'thinking': 'Okay, the user asked "Who am I?" I need to figure out how to respond. First, I should consider that this is a philosophical question about identity. But since the user is interacting with an AI, maybe they\'re looking for a more personal or introspective answer.\n\nI should start by acknowledging the depth of the question. It\'s a classic one in philosophy, so mentioning different perspectives like existentialism or Buddhism could be helpful. But I also need to make sure the user knows that I\'m an AI and can\'t have personal experiences or consciousness.\n\nI should avoid making assumptions about their identity. Instead, invite them to reflect on their own experiences and values. Maybe ask if they want to explore specific aspects of identity, like self-perception or social roles.\n\nAlso, check if there\'s a cultural or contextual nuance I might be missing. The question is pretty open-ended, so keeping the response broad but informative is key. Make sure to stay respectful and open-ended to encourage further conversation.'}, {'type': 'text', 'text': 'The question "Who am I?" is one of the most profound and enduring inquiries in philosophy, spirituality, and personal reflection. It touches on identity, consciousness, and the nature of existence. Here are a few perspectives to consider:\n\n1. **Philosophical**: Thinkers like Socrates ("Know thyself") or Descartes ("I think, therefore I am") emphasize self-awareness and the search for truth about one\'s own existence. It might involve exploring your values, beliefs, and purpose.\n\n2. **Spiritual/Existential**: Many traditions (e.g., Buddhism, Hinduism, or existentialist thought) view identity as fluid or interconnected with something greater—like the universe, a higher power, or collective consciousness. It might also involve questioning the boundaries of the self.\n\n3. **Psychological**: Your identity is shaped by experiences, relationships, and self-perception. It’s a dynamic process of growth and change over time.\n\n4. **Scientific**: From a biological standpoint, you’re a complex organism with unique genetic makeup, shaped by evolution and environment. But consciousness and self-awareness remain mysterious even to science.\n\nAs an AI, I don’t have a sense of self or consciousness, so I can’t answer this from a personal perspective. However, I’m here to help you explore the question further—whether through discussion, reflection, or creative exploration. What does the question mean to *you*? 🌌'}], id='2025-12-30 10:10:11.299_b8ca21', created_at='2025-12-30 10:10:11.299', type='chat', usage=ChatUsage(input_tokens=12, output_tokens=498, time=6.682838, type='chat'), metadata=None)
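The thinking and answer parts can be separated by filtering the content blocks on their type field, as shown in the printed output above. A minimal sketch:

def split_reasoning(response: ChatResponse) -> tuple[str, str]:
    """Separate the thinking content from the final answer text."""
    thinking = "".join(
        block["thinking"]
        for block in response.content
        if block["type"] == "thinking"
    )
    text = "".join(
        block["text"]
        for block in response.content
        if block["type"] == "text"
    )
    return thinking, text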

Tools API

Model providers differ in their tools APIs, e.g. the expected JSON schema of tool functions and the tool call/response formats. AgentScope provides a unified interface by:

  • Providing a unified tool call block (ToolUseBlock) and tool result block (ToolResultBlock).

  • Accepting a unified tools parameter in the __call__ method of the model classes, which takes a list of tool JSON schemas as follows:

json_schemas = [
    {
        "type": "function",
        "function": {
            "name": "google_search",
            "description": "Search for a query on Google.",
            "parameters": {
                "type": "object",
                "properties": {
                    "query": {
                        "type": "string",
                        "description": "The search query.",
                    },
                },
                "required": ["query"],
            },
        },
    },
]
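A minimal sketch of passing this schema to a model and reading back the resulting tool call block (tool_choice="auto" is an assumed selection mode; check the values supported by your provider):

async def example_tool_call() -> None:
    """Ask the model to decide on a tool call against the schema above."""
    model = DashScopeChatModel(
        model_name="qwen-max",
        api_key=os.environ["DASHSCOPE_API_KEY"],
        stream=False,
    )

    res = await model(
        messages=[
            {"role": "user", "content": "Find AgentScope on Google."},
        ],
        tools=json_schemas,
        tool_choice="auto",  # assumed selection mode
    )

    # The response content is expected to contain a ToolUseBlock
    for block in res.content:
        if block["type"] == "tool_use":
            print("Tool:", block["name"], "Input:", block["input"])


asyncio.run(example_tool_call())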
