
Conversation

@Light2Dark Light2Dark (Contributor) commented Jan 12, 2026

📝 Summary

Supports returning pydantic_ai types or dicts from both async and sync chat models.

```python
# Supported yield types
yield {"type": "text-start", "id": "1"}
yield pydantic_ai.ui.vercel.response_types.TextStart(id="1")
```
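For concreteness, here is a minimal sketch of a custom model that streams these chunk types through `mo.ui.chat`. The `(messages, config)` callable signature, `messages[-1].content`, and the `"text-start"`/`"text-delta"`/`"text-end"` part fields are assumptions based on the snippet above and marimo's existing chat API, not a spec:

```python
import marimo as mo


def echo_model(messages, config):
    # Sync generator yielding Vercel AI SDK-style dict chunks.
    # Part types and fields ("text-start", "id", "delta", ...) are assumed
    # from the example above; pydantic-ai response types could be yielded
    # in the same positions instead of dicts.
    text = messages[-1].content
    yield {"type": "text-start", "id": "1"}
    for word in text.split():
        yield {"type": "text-delta", "id": "1", "delta": word + " "}
    yield {"type": "text-end", "id": "1"}


async def async_echo_model(messages, config):
    # Async generators are supported too; plain strings are also accepted
    # and converted to stream chunks on the marimo side.
    yield "You said: "
    yield messages[-1].content


chat = mo.ui.chat(echo_model)
```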

Parking this approach here for discussion.

🔍 Description of Changes

📋 Checklist

  • I have read the contributor guidelines.
  • For large changes, or changes that affect the public API: this change was discussed or approved through an issue, on Discord, or the community discussions (Please provide a link if applicable).
  • Tests have been added for the changes made.
  • Documentation has been updated where applicable, including docstrings for API changes.
  • Pull request title is a good summary of the changes - it will be used in the release notes.
@vercel vercel bot commented Jan 12, 2026

The latest updates on your projects:

| Project | Deployment | Review | Updated (UTC) |
| --- | --- | --- | --- |
| marimo-docs | Ready | Ready (Preview, Comment) | Jan 15, 2026 4:12pm |

@Light2Dark Light2Dark changed the title support custom models returning vercel chunks, add vercel_messages parameter Jan 13, 2026
@Light2Dark Light2Dark added the enhancement New feature or request label Jan 13, 2026
@Light2Dark Light2Dark changed the title support custom models returning vercel chunks, add vercel_streaming parameter Jan 13, 2026
Copilot AI left a comment

Pull request overview

This pull request adds support for custom chat models to return Vercel AI SDK chunks, enabling rich streaming responses with reasoning, tool calls, and other advanced features. The implementation automatically detects and handles both Vercel AI SDK chunks (from pydantic-ai) and plain text strings, supporting both synchronous and asynchronous generators.
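As a rough illustration of the sync/async handling described above (not the PR's actual code), one common way to consume either kind of generator behind a single async interface looks like this; the function name is hypothetical:

```python
import inspect
from typing import Any, AsyncIterator, Iterator, Union


async def iter_chunks(
    result: Union[Iterator[Any], AsyncIterator[Any]],
) -> AsyncIterator[Any]:
    # Accept whatever the user's model returned and expose it uniformly
    # as an async stream of chunks (strings, dicts, or pydantic objects).
    # Assumes a sync generator yields quickly; a real implementation might
    # offload blocking iteration to a thread.
    if inspect.isasyncgen(result):
        async for chunk in result:
            yield chunk
    else:
        for chunk in result:
            yield chunk
```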

Changes:

  • Removed the concept of "frontend-managed" streaming in favor of a unified approach where all streaming is handled through Vercel AI SDK chunks
  • Added ChuckSerializer (typo - should be ChunkSerializer) class to automatically convert plain strings to Vercel AI SDK format or pass through dict/pydantic chunks as-is (a minimal sketch of this conversion follows this list)
  • Updated frontend to always use streaming protocol, removing conditional logic based on frontendManaged flag
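A hypothetical sketch of the conversion the review describes, assuming pydantic v2; the function name, the `"text-delta"` part shape, and the placeholder `"id"` are illustrative, not the class added in this PR:

```python
from typing import Any

from pydantic import BaseModel


def serialize_chunk(chunk: Any) -> dict:
    """Normalize one streamed chunk to a Vercel AI SDK-style dict."""
    if isinstance(chunk, str):
        # Plain text: wrap it as a text-delta part. The "id" value is an
        # arbitrary placeholder for illustration.
        return {"type": "text-delta", "id": "0", "delta": chunk}
    if isinstance(chunk, dict):
        # Dicts are assumed to already be valid chunks; pass through as-is.
        return chunk
    if isinstance(chunk, BaseModel):
        # pydantic-ai response types: dump to a dict, dropping None fields
        # so the serialized JSON stays clean.
        return chunk.model_dump(exclude_none=True, by_alias=True)
    raise TypeError(f"Unsupported chunk type: {type(chunk)!r}")
```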

Reviewed changes

Copilot reviewed 9 out of 9 changed files in this pull request and generated 12 comments.

Summary per file:

| File | Description |
| --- | --- |
| marimo/_plugins/ui/_impl/chat/chat.py | Core implementation: removes frontend-managed flag, adds ChuckSerializer for automatic chunk conversion, updates streaming response handling |
| tests/_plugins/ui/_impl/chat/test_chat_delta_streaming.py | Updates tests to verify Vercel AI SDK chunk format instead of accumulated text |
| tests/_plugins/ui/_impl/chat/test_chat.py | Adds comprehensive tests for dict chunks, pydantic chunks, and ChuckSerializer |
| frontend/src/plugins/impl/chat/chat-ui.tsx | Simplifies streaming logic by removing frontend-managed conditional branches, always uses streaming protocol |
| frontend/src/plugins/impl/chat/ChatPlugin.tsx | Updates return type from `string |
| marimo/_ai/llm/_impl.py | Adds helper to remove None values from dicts for proper serialization |
| marimo/_ai/_types.py | Sanitizes parts by removing None values before validation |
| marimo/_smoke_tests/ai/chat-variations.py | New smoke test file demonstrating various chat model patterns |
| examples/ai/chat/pydantic-ai-chat.py | Adds custom model example showing Vercel AI SDK chunk usage |


Comment on lines +216 to 218
```ts
const strippedError = (error as Error).message
  ?.split("failed with exception ")
  .pop();
```
Copilot AI commented Jan 15, 2026

The error message handling has a potential issue. If error.message is undefined, the optional chaining ?.split() will fail on .pop(). This should use conditional logic or provide a fallback error message.

Suggested change:

```diff
-const strippedError = (error as Error).message
-  ?.split("failed with exception ")
-  .pop();
+const errorMessage =
+  error instanceof Error && typeof error.message === "string"
+    ? error.message
+    : String(error || "Unknown error");
+const strippedError = errorMessage.includes("failed with exception ")
+  ? (errorMessage.split("failed with exception ").pop() ?? errorMessage)
+  : errorMessage;
```
@Light2Dark Light2Dark changed the title support custom models returning vercel chunks by adding a vercel_streaming parameter Jan 15, 2026
Light2Dark and others added 4 commits January 15, 2026 23:56
simplify mo.ui.chat model (edc5d05)

This treats all stream types (strings, Vercel objects, or dictionaries) the same, so the distinction is opaque to the frontend: everything is valid AI SDK stream protocol.
@Light2Dark Light2Dark force-pushed the sham/support-custom-models-returning-vercel-chunks branch from 7467243 to 0dae236 on January 15, 2026 15:56
@mscolnick mscolnick merged commit 64d35f1 into main Jan 15, 2026
35 of 45 checks passed
@mscolnick mscolnick deleted the sham/support-custom-models-returning-vercel-chunks branch January 15, 2026 16:28
@github-actions

🚀 Development release published. You may be able to view the changes at https://marimo.app?v=0.19.3-dev45


Labels

enhancement New feature or request

3 participants