Releases: vercel/ai

ai@5.0.0

31 Jul 15:38
a5e92fe

Major Changes

  • e1cbf8a: chore(@ai-sdk/rsc): extract to separate package

  • a847c3e: chore: rename reasoning to reasoningText etc

  • 13fef90: chore (ai): remove automatic conversion of UI messages to model messages

  • d964901: remove setting temperature to 0 by default

    • remove null option from DefaultSettingsMiddleware
    • remove setting defaults for temperature and stopSequences in ai to enable middleware changes

  • 0a710d8: feat (ui): typed tool parts in ui messages

  • 9ad0484: feat (ai): automatic tool execution error handling

  • 63f9e9b: chore (provider,ai): tools have input/output instead of args,result

  • ab7ccef: chore (ai): change source ui message parts to source-url

  • d5f588f: AI SDK 5

  • ec78cdc: chore (ai): remove "data" UIMessage role

  • 6a83f7d: refactoring (ai): restructure message metadata transfer

  • db345da: chore (ai): remove exports of internal ui functions

  • 496bbc1: chore (ui): inline/remove ChatRequest type

  • 72d7d72: chore (ai): stable activeTools

  • 40acf9b: feat (ui): introduce ChatStore and ChatTransport

  • 98f25e5: chore (ui): remove managed chat inputs

  • 2d03e19: chore (ai): remove StreamCallbacks.onCompletion

  • da70d79: chore (ai): remove getUIText helper

  • c60f895: chore (ai): remove useChat keepLastMessageOnError

  • 0560977: chore (ai): improve consistency of generate text result, stream text result, and step result

  • 9477ebb: chore (ui): remove useAssistant hook (breaking change)

  • 1f55c21: chore (ai): send reasoning to the client by default

  • e7dc6c7: chore (ai): remove onResponse callback

  • 8b86e99: chore (ai): replace Message with UIMessage

  • 04d5063: chore (ai): rename default provider global to AI_SDK_DEFAULT_PROVIDER

  • 319b989: chore (ai): remove content from ui messages

  • 14c9410: chore: refactor file towards source pattern (spec)

  • a34eb39: chore (ai): remove data and allowEmptySubmit from ChatRequestOptions

  • f04fb4a: chore (ai): replace useChat attachments with file ui parts

  • f7e8bf4: chore (ai): flatten ui message stream parts

  • 257224b: chore (ai): separate TextStreamChatTransport

  • fd1924b: chore (ai): remove redundant mimeType property

  • 2524fc7: chore (ai): remove ui message toolInvocations property

  • 6fba4c7: chore (ai): remove deprecated experimental_providerMetadata

  • b4b4bb2: chore (ui): rename experimental_resume to resumeStream

  • 441d042: chore (ui): data stream protocol v2 with SSEs

  • ef256ed: chore (ai): refactor and use chatstore in svelte

  • 516be5b: Move Image Model Settings into generate options

    Image Models no longer have settings. Instead, maxImagesPerCall can be passed directly to generateImage(). All other image settings can be passed to providerOptions[provider].

    Before

    await generateImage({
      model: luma.image('photon-flash-1', {
        maxImagesPerCall: 5,
        pollIntervalMillis: 500,
      }),
      prompt,
      n: 10,
    });

    After

    await generateImage({
      model: luma.image('photon-flash-1'),
      prompt,
      n: 10,
      maxImagesPerCall: 5,
      providerOptions: {
        luma: { pollIntervalMillis: 500 },
      },
    });

    Pull Request: #6180

  • a662dea: chore (ai): remove sendExtraMessageFields

  • d884051: feat (ai): simplify default provider setup

  • e8324c5: feat (ai): add args callbacks to tools

  • fafc3f2: chore (ai): change file to parts to use urls instead of data

  • 1ed0287: chore (ai): stable sendStart/sendFinish options

  • c7710a9: chore (ai): rename DataStreamToSSETransformStream to JsonToSseTransformStream

  • bfbfc4c: feat (ai): streamText/generateText: totalUsage contains usage for all steps; usage is for a single step (see the sketch after this list)

  • 9ae327d: chore (ui): replace chat store concept with chat instances

  • 9315076: chore (ai): rename continueUntil to stopWhen; rename the maxSteps stop condition to stepCountIs (see the sketch after this list)

  • 247ee0c: chore (ai): remove steps from tool invocation ui parts

  • 109c0ac: chore (ai): rename id to chatId (in post request, resume request, and useChat)

  • 954aa73: feat (ui): extended regenerate support

  • 33eb499: feat (ai): inject message id in createUIMessageStream

  • 901df02: feat (ui): use UI_MESSAGE generic

  • 4892798: chore (ai): always stream tool calls

  • c25cbce: feat (ai): use console.error as default error handler for streamText and streamObject

  • b33ed7a: chore (ai): rename DataStream_ to UIMessage_

  • ed675de: feat (ai): add ui data parts

  • 7bb58d4: chore (ai): restructure prepareRequest

  • ea7a7c9: feat (ui): UI message metadata

  • 0463011: fix (ai): update source url stream part

  • dcc549b: remove StreamTextResult.mergeIntoDataStream method

    • rename DataStreamOptions.getErrorMessage to onError
    • add pipeTextStreamToResponse function
    • add createTextStreamResponse function (see the sketch after this list)
    • change createDataStreamResponse function to accept a DataStream instead of a DataStreamWriter
    • change pipeDataStreamToResponse function to accept a DataStream instead of a DataStreamWriter
    • change pipeDataStreamToResponse function to have a single parameter

  • 35fc02c: chore (ui): rename RequestOptions to CompletionRequestOptions

  • 64f6d64: feat (ai): replace maxSteps with continueUntil (generateText)

  • 175b868: chore (ai): rename reasoning UI parts 'reasoning' property to 'text'

  • 60e2c56: feat (ai): restructure chat transports

  • 765f1cd: chore (ai): remove deprecated useChat isLoading helper

  • cb2b53a: chore (ai): refactor header preparation

  • e244a78: chore (ai): remove StreamData and mergeStreams

  • d306260: feat (ai): replace maxSteps with continueUntil (streamText)

  • 4bfe9ec: chore (ai): remove ui message reasoning property

  • 1766ede: chore: rename maxTokens to maxOutputTokens

  • 2877a74: chore (ai): remove ui message data property

  • 1409e13: chore (ai): remove experimental continueSteps

  • b32e192: chore (ai): rename reasoning to reasoningText, rename reasoningDetails to reasoning (streamText, generateText)

  • 92cb0a2: chore (ai): rename CoreMessage to ModelMessage

  • 2b637d6: chore (ai): rename UIMessageStreamPart to UIMessageChunk
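
A minimal sketch of how several of the breaking changes above fit together (the xai provider, the grok-4 model id, and the exact tool shape are illustrative assumptions, not part of these release notes): tools declare inputSchema/execute and surface input/output instead of args/result, stopWhen with stepCountIs replaces maxSteps, and results expose usage for a single step alongside totalUsage for all steps.

    // Illustrative AI SDK 5 sketch; model choice and values are placeholders.
    import { generateText, tool, stepCountIs } from 'ai';
    import { xai } from '@ai-sdk/xai';
    import { z } from 'zod';

    const result = await generateText({
      model: xai('grok-4'),
      prompt: 'What is the weather in Berlin?',
      tools: {
        weather: tool({
          description: 'Get the weather for a city',
          // tools now use input/output: the input is declared via inputSchema,
          // and whatever execute returns becomes the tool output
          inputSchema: z.object({ city: z.string() }),
          execute: async ({ city }) => ({ city, temperatureCelsius: 18 }),
        }),
      },
      // stopWhen + stepCountIs replaces maxSteps as the multi-step stop condition
      stopWhen: stepCountIs(5),
    });

    console.log(result.usage);      // usage for a single (the final) step
    console.log(result.totalUsage); // combined usage across all steps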
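
And a minimal sketch of the new createTextStreamResponse helper from the dcc549b entry (the route-handler shape and the textStream option name are assumptions as far as these notes are concerned):

    // Illustrative sketch: stream plain text from a route handler.
    import { streamText, createTextStreamResponse } from 'ai';
    import { xai } from '@ai-sdk/xai';

    export async function POST(req: Request) {
      const { prompt } = await req.json();
      const result = streamText({ model: xai('grok-3'), prompt });
      // the helper wraps a text stream in a Response (no DataStreamWriter involved)
      return createTextStreamResponse({ textStream: result.textStream });
    }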

Minor Changes

  • b7eae2d: feat (core): Add finishReason field to NoObjectGeneratedError
  • bcea599: feat (ai): add content to generateText result
  • 48d675a: feat (ai): add content to streamText result
  • c9ad635: feat (ai): add filename to file ui parts

Patch Changes

  • a571d6e: chore(provider-utils): move ToolResultContent to provider-utils

  • de2d2ab: feat(ai): add provider and provider registry middleware functionality

  • c22ad54: feat(smooth-stream): chunking callbacks

  • d88455d: feat (ai): expose http chat transport type

  • e7fcc86: feat (ai): introduce dynamic tools

  • da1e6f0: feat (ui): add generics to ui message stream parts

  • 48378b9: fix (ai): send null as tool output when tools return undefined

  • 5d1e3ba: chore (ai): remove provider re-exports

  • 93d53a1: chore (ai): remove cli

  • e90d45d: chore (rsc): move HANGING_STREAM_WARNING_TIME constant into @ai-sdk/rsc package

  • b32c141: feat (ai): add array support to stopWhen

  • bc3109f: chore (ai): push stream-callbacks into langchain/llamaindex adapters

  • 0d9583c: fix (ai): use user-provided media type when available

  • 38ae5cc: feat (ai): export InferUIMessageChunk type

  • 10b21eb: feat(cli): add ai command line interface

  • 9e40cbe: Allow destructuring output and errorText on ToolUIPart type

  • 6909543: feat (ai): support system parameter in Agent constructor

  • 86cfc72: feat (ai): add ignoreIncompleteToolCalls option to convertToModelMessages

  • 377bbcf: fix (ui): tool input can be undefined during input-streaming

  • d8aeaef: feat(providers/fal): add transcribe

  • ae77a99: chore (ai): rename text and reasoning chunks in streamText fullstream

  • 4fef487: feat: support for zod v4 for schema validation

    All of these methods now accept both zod v3 and zod v4 schemas for validation (see the sketch after this list of methods):

    • generateObject()
    • streamObject()
    • generateText()
    • experimental_useObject() from @ai-sdk/react
    • streamUI() from @ai-sdk/rsc
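
    A minimal sketch of the zod v4 path with generateObject (provider and model id reused from this release purely for illustration; zod v3 schemas are accepted the same way):

    // Illustrative sketch; any AI SDK 5 provider works here.
    import { generateObject } from 'ai';
    import { xai } from '@ai-sdk/xai';
    import { z } from 'zod'; // zod v4 or v3

    const { object } = await generateObject({
      model: xai('grok-3'),
      schema: z.object({ name: z.string(), age: z.number() }),
      prompt: 'Generate an example person.',
    });
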
  • b1e3abd: feat (ai): expose ui message stream headers

  • 4f3e637: fix (ui): avoid caching globalThis.fetch in case it is patched by other libraries

  • 14cb3be: chore(providers/llamaindex): extract to separate package

  • 1f6ce57: feat (ai): infer tool call types in the onToolCall callback

  • 16ccfb2: feat (ai): add readUIMessageStream helper

  • 225f087: fix (ai/mcp): prevent mutation of customEnv

  • ce1d1f3: feat (ai): export mock image, speech, and transcription models

  • fc0380b: feat (ui): resolvable header, body, credentials in http chat transport

  • 6622441: feat (ai): add static/dynamic toolCalls/toolResults helpers

  • 4048ce3: fix (ai): add tests and examples for openai responses

  • 6c42e56: feat (ai): validate ui stream data chunks

  • bedb239: chore (ai): make ui stream parts value optional when it's not required

  • 9b4d074: feat(streamObject): add enum support

  • c8fce91: feat (ai): add experimental Agent abstraction

  • 655cf3c: feat (ui): add onFinish to createUIMessageStream

  • 3e10408: fix(utils/detect-mimetype): add support for detecting id3 tags

  • d5ae088: feat (ui): add sendAutomaticallyWhen to Chat

  • ced8eee: feat(ai): re-export zodSchema from main package

  • c040e2f: fix (ui): inject generated response message id

  • d3960e3: make selectTelemetryAttributes more robust

  • faea29f: fix (provider/openai): multi-step reasoning with text

  • 66af894: fix (ai): respect content order in toResponseMessages

  • 332167b: chore (ai): move maxSteps into UseChatOptions

  • 6b1c55c: feat (ai): introduce GLOBAL_DEFAULT_PROVIDER

  • 5a975a4: feat (ui): update Chat tool result submission

  • 507ac1d: fix (ui/react): update messag...

Read more

@ai-sdk/xai@2.0.0

31 Jul 15:40
a5e92fe

Major Changes

  • d5f588f: AI SDK 5

  • 516be5b: Move Image Model Settings into generate options. Image models no longer have settings: maxImagesPerCall is now passed directly to generateImage(), and all other image settings go to providerOptions[provider]. See the full before/after example in the ai@5.0.0 notes above. Pull Request: #6180

Minor Changes

Patch Changes

  • 41cab5c: fix(providers/xai): edit supported models for structured output
  • fa49207: feat(providers/openai-compatible): convert to providerOptions
  • cf8280e: fix(providers/xai): return actual usage when streaming instead of NaN
  • e2aceaf: feat: add raw chunk support
  • eb173f1: chore (providers): remove model shorthand deprecation warnings
  • 9301f86: refactor (image-model): rename ImageModelV1 to ImageModelV2
  • 6d835a7: fix (provider/grok): filter duplicated reasoning chunks
  • d9b26f2: chore (providers/xai): update grok-3 model aliases
  • 66b9661: feat (provider/xai): export XaiProviderOptions
  • 9e986f7: feat (provider/xai): add grok-4 model id
  • d1a034f: feature: use Zod 4 internally
  • 107cd62: Add native XAI chat language model implementation
  • 205077b: fix: improve Zod compatibility
  • a7d3fbd: feat (providers/xai): add grok-3 models
  • Updated dependencies [a571d6e]
  • Updated dependencies [742b7be]
  • Updated dependencies [e7fcc86]
  • Updated dependencies [7cddb72]
  • Updated dependencies [ccce59b]
  • Updated dependencies [e2b9e4b]
  • Updated dependencies [95857aa]
  • Updated dependencies [45c1ea2]
  • Updated dependencies [6f6bb89]
  • Updated dependencies [060370c]
  • Updated dependencies [dc714f3]
  • Updated dependencies [b5da06a]
  • Updated dependencies [d1a1aa1]
  • Updated dependencies [63f9e9b]
  • Updated dependencies [5d142ab]
  • Updated dependencies [d5f588f]
  • Updated dependencies [e025824]
  • Updated dependencies [0571b98]
  • Updated dependencies [6db02c9]
  • Updated dependencies [b6b43c7]
  • Updated dependencies [4fef487]
  • Updated dependencies [48d257a]
  • Updated dependencies [0c0c0b3]
  • Updated dependencies [0d2c085]
  • Updated dependencies [fa49207]
  • Updated dependencies [40acf9b]
  • Updated dependencies [cf8280e]
  • Updated dependencies [9222aeb]
  • Updated dependencies [b9a6121]
  • Updated dependencies [e2aceaf]
  • Updated dependencies [411e483]
  • Updated dependencies [8ba77a7]
  • Updated dependencies [db72adc]
  • Updated dependencies [7b3ae3f]
  • Updated dependencies [a166433]
  • Updated dependencies [26735b5]
  • Updated dependencies [443d8ec]
  • Updated dependencies [42e32b0]
  • Updated dependencies [a8c8bd5]
  • Updated dependencies [abf9a79]
  • Updated dependencies [14c9410]
  • Updated dependencies [e86be6f]
  • Updated dependencies [9bf7291]
  • Updated dependencies [2e13791]
  • Updated dependencies [7b069ed]
  • Updated dependencies [9f95b35]
  • Updated dependencies [66962ed]
  • Updated dependencies [0d06df6]
  • Updated dependencies [472524a]
  • Updated dependencies [dd3ff01]
  • Updated dependencies [d9209ca]
  • Updated dependencies [d9c98f4]
  • Updated dependencies [05d2819]
  • Updated dependencies [9301f86]
  • Updated dependencies [0a87932]
  • Updated dependencies [737f1e2]
  • Updated dependencies [c4a2fec]
  • Updated dependencies [957b739]
  • Updated dependencies [79457bd]
  • Updated dependencies [a3f768e]
  • Updated dependencies [7435eb5]
  • Updated dependencies [8aa9e20]
  • Updated dependencies [4617fab]
  • Updated dependencies [516be5b]
  • Updated dependencies [ac34802]
  • Updated dependencies [0054544]
  • Updated dependencies [cb68df0]
  • Updated dependencies [ad80501]
  • Updated dependencies [68ecf2f]
  • Updated dependencies [9e9c809]
  • Updated dependencies [32831c6]
  • Updated dependencies [6dc848c]
  • Updated dependencies [6b98118]
  • Updated dependencies [d0f9495]
  • Updated dependencies [63d791d]
  • Updated dependencies [87b828f]
  • Updated dependencies [3f2f00c]
  • Updated dependencies [bfdca8d]
  • Updated dependencies [0ff02bb]
  • Updated dependencies [7979f7f]
  • Updated dependencies [39a4fab]
  • Updated dependencies [44f4aba]
  • Updated dependencies [9bd5ab5]
  • Updated dependencies [57edfcb]
  • Updated dependencies [faf8446]
  • Updated dependencies [7ea4132]
  • Updated dependencies [d1a034f]
  • Updated dependencies [5c56081]
  • Updated dependencies [fd65bc6]
  • Updated dependencies [023ba40]
  • Updated dependencies [ea7a7c9]
  • Updated dependencies [1b101e1]
  • Updated dependencies [26535e0]
  • Updated dependencies [e030615]
  • Updated dependencies [5e57fae]
  • Updated dependencies [393138b]
  • Updated dependencies [c57e248]
  • Updated dependencies [88a8ee5]
  • Updated dependencies [41fa418]
  • Updated dependencies [205077b]
  • Updated dependencies [71f938d]
  • Updated dependencies [3795467]
  • Updated dependencies [28a5ed5]
  • Updated dependencies [7182d14]
  • Updated dependencies [c1e6647]
  • Updated dependencies [1766ede]
  • Updated dependencies [811dff3]
  • Updated dependencies [f10304b]
  • Updated dependencies [dd5fd43]
  • Updated dependencies [33f4a6a]
  • Updated dependencies [383cbfa]
  • Updated dependencies [27deb4d]
  • Updated dependencies [c4df419]
  • Updated dependencies [281bb1c]
    • @ai-sdk/provider-utils@3.0.0
    • @ai-sdk/provider@2.0.0
    • @ai-sdk/openai-compatible@1.0.0

@ai-sdk/vue@2.0.0

31 Jul 15:39
a5e92fe

Major Changes

  • 0a710d8: feat (ui): typed tool parts in ui messages
  • d5f588f: AI SDK 5
  • 40acf9b: feat (ui): introduce ChatStore and ChatTransport
  • 98f25e5: chore (ui): remove managed chat inputs
  • 9477ebb: chore (ui): remove useAssistant hook (breaking change)
  • 901df02: feat (ui): use UI_MESSAGE generic
  • 98f25e5: chore (ui/vue): replace useChat with new Chat
  • 8cbbad6: chore (ai): refactor and use chatstore in vue

Patch Changes

Read more

@ai-sdk/vercel@1.0.0

31 Jul 15:40
a5e92fe

Patch Changes

@ai-sdk/valibot@1.0.0

31 Jul 15:39
a5e92fe

Major Changes

Patch Changes

@ai-sdk/togetherai@1.0.0

31 Jul 15:39
a5e92fe

Major Changes

  • d5f588f: AI SDK 5

  • 516be5b: Move Image Model Settings into generate options. Image models no longer have settings: maxImagesPerCall is now passed directly to generateImage(), and all other image settings go to providerOptions[provider]. See the full before/after example in the ai@5.0.0 notes above. Pull Request: #6180

Patch Changes

@ai-sdk/svelte@3.0.0

31 Jul 15:40
a5e92fe

Major Changes

  • 0a710d8: feat (ui): typed tool parts in ui messages
  • d5f588f: AI SDK 5
  • 496bbc1: chore (ui): inline/remove ChatRequest type
  • 40acf9b: feat (ui): introduce ChatStore and ChatTransport
  • 98f25e5: chore (ui): remove managed chat inputs
  • 901df02: feat (ui): use UI_MESSAGE generic

Patch Changes

@ai-sdk/rsc@1.0.0

31 Jul 15:39
a5e92fe

Major Changes

  • e1cbf8a: chore(@ai-sdk/rsc): extract to separate package

Patch Changes

Read more

@ai-sdk/revai@1.0.0

31 Jul 15:39
a5e92fe

Patch Changes

@ai-sdk/replicate@1.0.0

31 Jul 15:39
a5e92fe

Major Changes

  • d5f588f: AI SDK 5

  • 516be5b: Move Image Model Settings into generate options. Image models no longer have settings: maxImagesPerCall is now passed directly to generateImage(), and all other image settings go to providerOptions[provider]. See the full before/after example in the ai@5.0.0 notes above. Pull Request: #6180

Patch Changes