support custom models returning vercel chunks #7797
Conversation
Pull request overview
This pull request adds support for custom chat models to return Vercel AI SDK chunks, enabling rich streaming responses with reasoning, tool calls, and other advanced features. The implementation automatically detects and handles both Vercel AI SDK chunks (from pydantic-ai) and plain text strings, supporting both synchronous and asynchronous generators.
Changes:
- Removed the concept of "frontend-managed" streaming in favor of a unified approach where all streaming is handled through Vercel AI SDK chunks
- Added a `ChuckSerializer` class (typo: should be `ChunkSerializer`) to automatically convert plain strings to Vercel AI SDK format, or pass dict/pydantic chunks through as-is
- Updated the frontend to always use the streaming protocol, removing conditional logic based on the `frontendManaged` flag
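For illustration, a minimal sketch of what this enables. The `(messages, config)` model signature follows marimo's documented `mo.ui.chat` callable convention; the chunk field names (`"text-delta"`, `"reasoning-delta"`) are assumed AI SDK stream-part shapes used for illustration, not taken from this PR:

```python
import marimo as mo

# Sync generator yielding plain strings: wrapped into AI SDK text
# chunks automatically by the serializer.
def plain_model(messages, config):
    yield "Hello, "
    yield "world!"

# Async generator yielding dict chunks in (assumed) Vercel AI SDK
# stream-part shape: passed through to the frontend as-is.
async def chunk_model(messages, config):
    yield {"type": "reasoning-delta", "id": "r1", "delta": "thinking..."}
    yield {"type": "text-delta", "id": "t1", "delta": "Hello, "}
    yield {"type": "text-delta", "id": "t1", "delta": "world!"}

chat = mo.ui.chat(chunk_model)
```

Either style produces the same wire protocol, so the frontend never needs to know which kind of model it is talking to.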
Reviewed changes
Copilot reviewed 9 out of 9 changed files in this pull request and generated 12 comments.
| File | Description |
|---|---|
| `marimo/_plugins/ui/_impl/chat/chat.py` | Core implementation: removes the frontend-managed flag, adds `ChuckSerializer` for automatic chunk conversion, updates streaming response handling |
| `tests/_plugins/ui/_impl/chat/test_chat_delta_streaming.py` | Updates tests to verify Vercel AI SDK chunk format instead of accumulated text |
| `tests/_plugins/ui/_impl/chat/test_chat.py` | Adds comprehensive tests for dict chunks, pydantic chunks, and `ChuckSerializer` |
| `frontend/src/plugins/impl/chat/chat-ui.tsx` | Simplifies streaming logic by removing frontend-managed conditional branches; always uses the streaming protocol |
| `frontend/src/plugins/impl/chat/ChatPlugin.tsx` | Updates the return type from `string \|` … |
| `marimo/_ai/llm/_impl.py` | Adds a helper to remove `None` values from dicts for proper serialization |
| `marimo/_ai/_types.py` | Sanitizes parts by removing `None` values before validation |
| `marimo/_smoke_tests/ai/chat-variations.py` | New smoke test file demonstrating various chat model patterns |
| `examples/ai/chat/pydantic-ai-chat.py` | Adds a custom model example showing Vercel AI SDK chunk usage |
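Based on the descriptions above, the serializer's job can be sketched roughly as follows. The function name, the `"text-delta"` chunk shape, and the exact dispatch order are hypothetical; only the three behaviors (wrap strings, pass dicts through, dump pydantic models without `None`s) come from the review summary:

```python
from typing import Any

def serialize_chunk(chunk: Any) -> dict:
    """Normalize one model output item into an AI SDK-style dict chunk."""
    if isinstance(chunk, str):
        # Plain strings become text chunks ("text-delta" is an assumed
        # AI SDK part type used here for illustration).
        return {"type": "text-delta", "delta": chunk}
    if isinstance(chunk, dict):
        # Dicts are assumed to already be valid stream-protocol chunks.
        return chunk
    # Pydantic models (e.g. pydantic-ai event types) are dumped to dicts,
    # dropping None values so optional fields serialize cleanly --
    # mirroring the None-stripping described for _impl.py and _types.py.
    return chunk.model_dump(exclude_none=True)
```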
```tsx
const strippedError = (error as Error).message
  ?.split("failed with exception ")
  .pop();
```
**Copilot AI** · Jan 15, 2026
The error message handling has a potential issue: if `error.message` is undefined, the optional chain short-circuits and `strippedError` ends up `undefined` rather than a usable message. This should use conditional logic or provide a fallback error message.
Suggested change:

```tsx
// Guard the message access and fall back to stringifying the error,
// so strippedError is always a string.
const errorMessage =
  error instanceof Error && typeof error.message === "string"
    ? error.message
    : String(error || "Unknown error");
const strippedError = errorMessage.includes("failed with exception ")
  ? (errorMessage.split("failed with exception ").pop() ?? errorMessage)
  : errorMessage;
```
[simplify mo.ui.chat model](edc5d05): this treats all stream types (plain strings, Vercel objects, or dictionaries) the same, so the distinction is opaque to the frontend; everything is valid AI SDK stream protocol.
Force-pushed 7467243 to 0dae236.
🚀 Development release published. You may be able to view the changes at https://marimo.app?v=0.19.3-dev45
📝 Summary
Supports returning pydantic_ai types or dicts from async or sync chat models.
Parking this approach here for discussion.
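As a concrete sketch of the pydantic-ai case, a custom model might stream deltas from an agent like this. This assumes pydantic-ai's `Agent.run_stream` / `stream_text(delta=True)` API and is not the example file from this PR:

```python
import marimo as mo
from pydantic_ai import Agent

agent = Agent("openai:gpt-4o")

async def pydantic_model(messages, config):
    # Stream text deltas from the agent; marimo converts each
    # plain-string delta into a valid AI SDK chunk for the frontend.
    async with agent.run_stream(messages[-1].content) as result:
        async for delta in result.stream_text(delta=True):
            yield delta

chat = mo.ui.chat(pydantic_model)
```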
🔍 Description of Changes
📋 Checklist