Commit 92cb0a2
chore (ai): rename CoreMessage to ModelMessage (#6105)
## Background

The name `CoreMessage` was an artifact of needing to introduce a separate message type. Over time it became clear that we have three distinct message types:

- messages used on the client that are displayed to the user (`UIMessage`)
- messages that are sent to our user-facing APIs (currently `CoreMessage`)
- messages that are sent to the providers (`LanguageModelV2Message`)

They all have distinct DX considerations and are different by design. The name `CoreMessage` does not make this distinction clear.

## Summary

Rename `CoreMessage` to `ModelMessage` to make the `UIMessage <> ModelMessage` distinction clear (most users are not exposed to internal provider messages).

## Future Work

`CoreMessage` and related types and functions are deprecated and will be removed in AI SDK 6.
1 parent c9ad635 commit 92cb0a2
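To make the rename concrete, here is a minimal before/after sketch of a chat route handler, assembled from the documentation diffs below (the route shape and the `gpt-4` model choice are illustrative, not part of this commit):

```ts
// Before (old naming, now deprecated):
// import { CoreMessage, convertToCoreMessages, generateText } from 'ai';

// After this commit:
import { ModelMessage, convertToModelMessages, generateText } from 'ai';
import { openai } from '@ai-sdk/openai';

export async function POST(req: Request) {
  // UIMessage[] arrives from the client (e.g. a useChat frontend) ...
  const { messages } = await req.json();

  // ... and is converted to ModelMessage[] before it is sent to the model.
  const modelMessages: ModelMessage[] = convertToModelMessages(messages);

  const { text } = await generateText({
    model: openai('gpt-4'),
    messages: modelMessages,
  });

  return Response.json({ text });
}
```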

File tree

71 files changed: +316 -230 lines changed

.changeset/young-dingos-march.md

Lines changed: 5 additions & 0 deletions

@@ -0,0 +1,5 @@
+---
+'ai': major
+---
+
+chore (ai): rename CoreMessage to ModelMessage

content/cookbook/01-next/11-generate-text-with-chat-prompt.mdx

Lines changed: 4 additions & 4 deletions

@@ -31,12 +31,12 @@ Let's start by creating a simple chat interface with an input field that sends t
 ```tsx filename='app/page.tsx'
 'use client';

-import { CoreMessage } from 'ai';
+import { ModelMessage } from 'ai';
 import { useState } from 'react';

 export default function Page() {
   const [input, setInput] = useState('');
-  const [messages, setMessages] = useState<CoreMessage[]>([]);
+  const [messages, setMessages] = useState<ModelMessage[]>([]);

   return (
     <div>
@@ -90,11 +90,11 @@ export default function Page() {
 Next, let's create the `/api/chat` endpoint that generates the assistant's response based on the conversation history.

 ```typescript filename='app/api/chat/route.ts'
-import { CoreMessage, generateText } from 'ai';
+import { ModelMessage, generateText } from 'ai';
 import { openai } from '@ai-sdk/openai';

 export async function POST(req: Request) {
-  const { messages }: { messages: CoreMessage[] } = await req.json();
+  const { messages }: { messages: ModelMessage[] } = await req.json();

   const { response } = await generateText({
     model: openai('gpt-4'),

content/cookbook/01-next/24-stream-text-multistep.mdx

Lines changed: 1 addition & 1 deletion

@@ -55,7 +55,7 @@ export async function POST(req: Request) {
     'You are a helpful assistant with a different system prompt. Repeat the extract user goal in your answer.',
   // continue the workflow stream with the messages from the previous step:
   messages: [
-    ...convertToCoreMessages(messages),
+    ...convertToModelMessages(messages),
     ...(await result1.response).messages,
   ],
 });

content/cookbook/01-next/75-human-in-the-loop.mdx

Lines changed: 2 additions & 2 deletions

@@ -336,7 +336,7 @@ The solution above is low-level and not very friendly to use in a production env
 import {
   formatDataStreamPart,
   Message,
-  convertToCoreMessages,
+  convertToModelMessages,
   DataStreamWriter,
   ToolExecutionOptions,
   ToolSet,
@@ -419,7 +419,7 @@ export async function processToolCalls<
       const toolInstance = executeFunctions[toolName];
       if (toolInstance) {
        result = await toolInstance(toolInvocation.args, {
-         messages: convertToCoreMessages(messages),
+         messages: convertToModelMessages(messages),
          toolCallId: toolInvocation.toolCallId,
        });
      } else {

content/docs/02-foundations/03-prompts.mdx

Lines changed: 2 additions & 2 deletions

@@ -164,9 +164,9 @@ const messages = [
   AI SDK UI hooks like [`useChat`](/docs/reference/ai-sdk-ui/use-chat) return
   arrays of `UIMessage` objects, which do not support provider options. We
   recommend using the
-  [`convertToCoreMessages`](/docs/reference/ai-sdk-ui/convert-to-core-messages)
+  [`convertToModelMessages`](/docs/reference/ai-sdk-ui/convert-to-core-messages)
   function to convert `UIMessage` objects to
-  [`CoreMessage`](/docs/reference/ai-sdk-core/core-message) objects before
+  [`ModelMessage`](/docs/reference/ai-sdk-core/core-message) objects before
   applying or appending message(s) or message parts with `providerOptions`.
 </Note>
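The note in this hunk describes a conversion step; a short sketch of what it looks like in practice, with the system prompt and the `anthropic` provider options chosen purely for illustration:

```ts
import {
  convertToModelMessages,
  type ModelMessage,
  type UIMessage,
} from 'ai';

// Convert frontend UIMessage objects, then prepend a message that
// carries providerOptions (which UIMessage objects cannot express).
function toModelMessages(uiMessages: UIMessage[]): ModelMessage[] {
  return [
    {
      role: 'system',
      content: 'You are a helpful assistant.',
      providerOptions: {
        anthropic: { cacheControl: { type: 'ephemeral' } }, // illustrative
      },
    },
    ...convertToModelMessages(uiMessages),
  ];
}
```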

content/docs/02-getting-started/06-nodejs.mdx

Lines changed: 10 additions & 10 deletions

@@ -76,7 +76,7 @@ Create an `index.ts` file in the root of your project and add the following code

 ```ts filename="index.ts"
 import { openai } from '@ai-sdk/openai';
-import { CoreMessage, streamText } from 'ai';
+import { ModelMessage, streamText } from 'ai';
 import dotenv from 'dotenv';
 import * as readline from 'node:readline/promises';

@@ -87,7 +87,7 @@ const terminal = readline.createInterface({
   output: process.stdout,
 });

-const messages: CoreMessage[] = [];
+const messages: ModelMessage[] = [];

 async function main() {
   while (true) {
@@ -151,7 +151,7 @@ Modify your `index.ts` file to include the new weather tool:

 ```ts filename="index.ts" highlight="2,4,25-38"
 import { openai } from '@ai-sdk/openai';
-import { CoreMessage, streamText, tool } from 'ai';
+import { ModelMessage, streamText, tool } from 'ai';
 import dotenv from 'dotenv';
 import { z } from 'zod';
 import * as readline from 'node:readline/promises';
@@ -163,7 +163,7 @@ const terminal = readline.createInterface({
   output: process.stdout,
 });

-const messages: CoreMessage[] = [];
+const messages: ModelMessage[] = [];

 async function main() {
   while (true) {
@@ -221,7 +221,7 @@ Notice the blank "assistant" response? This is because instead of generating a t

 ```typescript highlight="47-48"
 import { openai } from '@ai-sdk/openai';
-import { CoreMessage, streamText, tool } from 'ai';
+import { ModelMessage, streamText, tool } from 'ai';
 import dotenv from 'dotenv';
 import { z } from 'zod';
 import * as readline from 'node:readline/promises';
@@ -233,7 +233,7 @@ const terminal = readline.createInterface({
   output: process.stdout,
 });

-const messages: CoreMessage[] = [];
+const messages: ModelMessage[] = [];

 async function main() {
   while (true) {
@@ -291,7 +291,7 @@ Modify your `index.ts` file to include the `maxSteps` option:

 ```ts filename="index.ts" highlight="39-42"
 import { openai } from '@ai-sdk/openai';
-import { CoreMessage, streamText, tool } from 'ai';
+import { ModelMessage, streamText, tool } from 'ai';
 import dotenv from 'dotenv';
 import { z } from 'zod';
 import * as readline from 'node:readline/promises';
@@ -303,7 +303,7 @@ const terminal = readline.createInterface({
   output: process.stdout,
 });

-const messages: CoreMessage[] = [];
+const messages: ModelMessage[] = [];

 async function main() {
   while (true) {
@@ -364,7 +364,7 @@ Update your `index.ts` file to add a new tool to convert the temperature from Ce

 ```ts filename="index.ts" highlight="38-49"
 import { openai } from '@ai-sdk/openai';
-import { CoreMessage, streamText, tool } from 'ai';
+import { ModelMessage, streamText, tool } from 'ai';
 import dotenv from 'dotenv';
 import { z } from 'zod';
 import * as readline from 'node:readline/promises';
@@ -376,7 +376,7 @@ const terminal = readline.createInterface({
   output: process.stdout,
 });

-const messages: CoreMessage[] = [];
+const messages: ModelMessage[] = [];

 async function main() {
   while (true) {

content/docs/02-guides/03-slackbot.mdx

Lines changed: 4 additions & 4 deletions

@@ -318,10 +318,10 @@ Here's how to implement it:

 ```typescript filename="lib/generate-response.ts"
 import { openai } from '@ai-sdk/openai';
-import { CoreMessage, generateText } from 'ai';
+import { ModelMessage, generateText } from 'ai';

 export const generateResponse = async (
-  messages: CoreMessage[],
+  messages: ModelMessage[],
   updateStatus?: (status: string) => void,
 ) => {
   const { text } = await generateText({
@@ -349,12 +349,12 @@ The real power of the AI SDK comes from tools that enable your bot to perform ac

 ```typescript filename="lib/generate-response.ts"
 import { openai } from '@ai-sdk/openai';
-import { CoreMessage, generateText, tool } from 'ai';
+import { ModelMessage, generateText, tool } from 'ai';
 import { z } from 'zod';
 import { exa } from './utils';

 export const generateResponse = async (
-  messages: CoreMessage[],
+  messages: ModelMessage[],
   updateStatus?: (status: string) => void,
 ) => {
   const { text } = await generateText({

content/docs/03-ai-sdk-core/15-tools-and-tool-calling.mdx

Lines changed: 2 additions & 2 deletions

@@ -191,12 +191,12 @@ Both `generateText` and `streamText` have a `response.messages` property that yo
 add the assistant and tool messages to your conversation history.
 It is also available in the `onFinish` callback of `streamText`.

-The `response.messages` property contains an array of `CoreMessage` objects that you can add to your conversation history:
+The `response.messages` property contains an array of `ModelMessage` objects that you can add to your conversation history:

 ```ts
 import { generateText } from 'ai';

-const messages: CoreMessage[] = [
+const messages: ModelMessage[] = [
   // ...
 ];
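The snippet in this hunk is elided; the appending pattern the surrounding prose describes looks roughly like this (the user prompt and model choice are placeholders):

```ts
import { openai } from '@ai-sdk/openai';
import { generateText, type ModelMessage } from 'ai';

const messages: ModelMessage[] = [
  { role: 'user', content: 'What is the weather in Berlin?' },
];

const { response, text } = await generateText({
  model: openai('gpt-4'),
  messages,
});

// response.messages holds the assistant (and any tool) messages;
// appending them keeps the history complete for the next call.
messages.push(...response.messages);
```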

content/docs/04-ai-sdk-ui/03-chatbot-message-persistence.mdx

Lines changed: 2 additions & 2 deletions

@@ -124,14 +124,14 @@ We have enabled the `sendExtraMessageFields` option to send the id and createdAt
 meaning that we store messages in the `useChat` message format.

 <Note>
-  The `useChat` message format is different from the `CoreMessage` format. The
+  The `useChat` message format is different from the `ModelMessage` format. The
   `useChat` message format is designed for frontend display, and contains
   additional fields such as `id` and `createdAt`. We recommend storing the
   messages in the `useChat` message format.
 </Note>

 Storing messages is done in the `onFinish` callback of the `streamText` function.
-`onFinish` receives the messages from the AI response as a `CoreMessage[]`,
+`onFinish` receives the messages from the AI response as a `ModelMessage[]`,
 and we use the [`appendResponseMessages`](/docs/reference/ai-sdk-ui/append-response-messages)
 helper to append the AI response messages to the chat messages.
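For context, the persistence flow this hunk documents, sketched under assumptions: the `saveChat` helper and its import path are hypothetical, the model choice is illustrative, and the `appendResponseMessages` call follows the signature of the helper linked above:

```ts
import { openai } from '@ai-sdk/openai';
import { appendResponseMessages, streamText } from 'ai';
import { saveChat } from '@/utils/queries'; // hypothetical persistence helper

export async function POST(req: Request) {
  const { id, messages } = await req.json();

  const result = streamText({
    model: openai('gpt-4o'),
    messages,
    onFinish: async ({ response }) => {
      // response.messages is a ModelMessage[]; merge it into the stored
      // useChat-format history before persisting.
      await saveChat({
        id,
        messages: appendResponseMessages({
          messages,
          responseMessages: response.messages,
        }),
      });
    },
  });

  return result.toDataStreamResponse();
}
```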

content/docs/05-ai-sdk-rsc/10-migrating-to-ui.mdx

Lines changed: 2 additions & 2 deletions

@@ -483,12 +483,12 @@ With AI SDK UI, you will save chats using the `onFinish` callback function of `s
 ```ts filename="@/app/api/chat/route.ts"
 import { openai } from '@ai-sdk/openai';
 import { saveChat } from '@/utils/queries';
-import { streamText, convertToCoreMessages } from 'ai';
+import { streamText, convertToModelMessages } from 'ai';

 export async function POST(request) {
   const { id, messages } = await request.json();

-  const coreMessages = convertToCoreMessages(messages);
+  const coreMessages = convertToModelMessages(messages);

   const result = streamText({
     model: openai('gpt-4o'),

0 commit comments
