
Conversation

@yliu7949 (Contributor) commented Dec 26, 2025

💻 Change Type

  • ✨ feat
  • 🐛 fix
  • ♻️ refactor
  • 💄 style
  • 👷 build
  • ⚡️ perf
  • ✅ test
  • 📝 docs
  • 🔨 chore

🔀 Description of Change

Add two new models, MiniMax-M2.1 and GLM-4.7, to the Qiniu provider.

🧪 How to Test

  • Tested locally
  • Added/updated tests
  • No tests needed

📝 Additional Information

Summary by Sourcery

Add new Qiniu chat models and update existing model metadata.

New Features:

  • Register the MiniMax M2.1 chat model for the Qiniu provider with pricing and capabilities metadata.
  • Introduce the GLM-4.7 chat model entry for the Qiniu provider with configuration and pricing details.

Enhancements:

  • Update the existing GLM-4.6 Qiniu model entry to reflect its status alongside the new GLM-4.7 model.
vercel bot commented Dec 26, 2025

@yliu7949 is attempting to deploy a commit to the LobeHub OSS Team on Vercel.

A member of the Team first needs to authorize it.

@dosubot dosubot bot added the size:M This PR changes 30-99 lines, ignoring generated files. label Dec 26, 2025
sourcery-ai bot (Contributor) commented Dec 26, 2025

Reviewer's Guide

Adds two new Qiniu chat model definitions (MiniMax-M2.1 and GLM-4.7) and adjusts the existing GLM-4.6 entry to keep it alongside the new GLM-4.7 model in the Qiniu provider model bank.

File-Level Changes

Change Details Files
Register MiniMax-M2.1 as a new Qiniu chat model with pricing and capabilities metadata.
  • Add a new AIChatModelCard entry for MiniMax M2.1 with function calling, reasoning, and search abilities enabled
  • Configure context window, max output tokens, pricing in CNY for text input/output, and release date metadata
  • Set search implementation to use parameter-based search and mark the model as enabled
packages/model-bank/src/aiModels/qiniu.ts
Introduce GLM-4.7 as the new flagship GLM model for Qiniu and keep GLM-4.6 as a separate model entry.
  • Replace the previous GLM-4.6 inline metadata with a new GLM-4.7 AIChatModelCard entry, including description, display name, id, capabilities, pricing, settings, and release date
  • Reintroduce GLM-4.6 as its own AIChatModelCard entry following GLM-4.7, with updated description text but preserving its original capabilities and limits
  • Ensure both GLM-4.7 and GLM-4.6 are enabled Qiniu chat models with reasoning, search, and function call abilities configured
packages/model-bank/src/aiModels/qiniu.ts
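The entries described above can be pictured with a sketch like the following. The field names mirror the AIChatModelCard shape quoted in the review snippet later in this thread (`pricing.units` with `name`/`rate`/`strategy`/`unit`, `releasedAt`, `settings.searchImpl`); the interface itself and all concrete numbers except the 8.4 output rate (which appears in the review snippet) are illustrative assumptions, not the values from the actual PR:

```typescript
// Hypothetical sketch of the two new Qiniu entries in qiniu.ts.
// The real AIChatModelCard type lives in packages/model-bank; this
// stand-in interface and the numeric values (context window, input
// rate) are assumptions for illustration only.
interface PricingUnit {
  name: string;
  rate: number;
  strategy: string;
  unit: string;
}

interface AIChatModelCardSketch {
  abilities: { functionCall: boolean; reasoning: boolean; search: boolean };
  contextWindowTokens: number;
  displayName: string;
  enabled: boolean;
  id: string;
  pricing: { currency: string; units: PricingUnit[] };
  releasedAt: string;
  settings: { searchImpl: string };
  type: 'chat';
}

const minimaxM21: AIChatModelCardSketch = {
  abilities: { functionCall: true, reasoning: true, search: true },
  contextWindowTokens: 204_800, // assumed; check provider docs
  displayName: 'MiniMax M2.1',
  enabled: true,
  id: 'minimax-m2.1', // assumed id
  pricing: {
    currency: 'CNY',
    units: [
      { name: 'textInput', rate: 2.1, strategy: 'fixed', unit: 'millionTokens' }, // assumed rate
      { name: 'textOutput', rate: 8.4, strategy: 'fixed', unit: 'millionTokens' }, // rate from review snippet
    ],
  },
  releasedAt: '2025-12-23',
  settings: { searchImpl: 'params' },
  type: 'chat',
};

// GLM-4.7 follows the same shape; values below are likewise assumptions.
const glm47: AIChatModelCardSketch = {
  ...minimaxM21,
  displayName: 'GLM-4.7',
  id: 'glm-4.7', // assumed id
  releasedAt: '2025-12-24',
};
```

The spread from `minimaxM21` into `glm47` is only a brevity device for this sketch; in the real file each entry carries its own full metadata.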

Tips and commands

Interacting with Sourcery

  • Trigger a new review: Comment @sourcery-ai review on the pull request.
  • Continue discussions: Reply directly to Sourcery's review comments.
  • Generate a GitHub issue from a review comment: Ask Sourcery to create an
    issue from a review comment by replying to it. You can also reply to a
    review comment with @sourcery-ai issue to create an issue from it.
  • Generate a pull request title: Write @sourcery-ai anywhere in the pull
    request title to generate a title at any time. You can also comment
    @sourcery-ai title on the pull request to (re-)generate the title at any time.
  • Generate a pull request summary: Write @sourcery-ai summary anywhere in
    the pull request body to generate a PR summary at any time exactly where you
    want it. You can also comment @sourcery-ai summary on the pull request to
    (re-)generate the summary at any time.
  • Generate reviewer's guide: Comment @sourcery-ai guide on the pull
    request to (re-)generate the reviewer's guide at any time.
  • Resolve all Sourcery comments: Comment @sourcery-ai resolve on the
    pull request to resolve all Sourcery comments. Useful if you've already
    addressed all the comments and don't want to see them anymore.
  • Dismiss all Sourcery reviews: Comment @sourcery-ai dismiss on the pull
    request to dismiss all existing Sourcery reviews. Especially useful if you
    want to start fresh with a new review - don't forget to comment
    @sourcery-ai review to trigger a new review!

Customizing Your Experience

Access your dashboard to:

  • Enable or disable review features such as the Sourcery-generated pull request
    summary, the reviewer's guide, and others.
  • Change the review language.
  • Add, remove or edit custom review instructions.
  • Adjust other review settings.

gru-agent bot (Contributor) commented Dec 26, 2025

TestGru Assignment

Summary

Link: Detail · Commit: a9d3481 · Status: 🚫 Skipped · Reason: No files need to be tested — {"packages/model-bank/src/aiModels/qiniu.ts": "The code does not contain any functions or classes."}

History Assignment

Tip

You can mention @gru-agent to leave your feedback; TestGru will make adjustments based on your input.

@dosubot dosubot bot added the Model Provider Model provider related label Dec 26, 2025
sourcery-ai bot (Contributor) left a comment


Hey - I've found 1 issue

Prompt for AI Agents
Please address the comments from this code review:

## Individual Comments

### Comment 1
<location> `packages/model-bank/src/aiModels/qiniu.ts:49-57` </location>
<code_context>
+        { name: 'textOutput', rate: 8.4, strategy: 'fixed', unit: 'millionTokens' },
+      ],
+    },
+    releasedAt: '2025-12-24',
+    settings: {
+      searchImpl: 'params',
</code_context>

<issue_to_address>
**issue (bug_risk):** Check whether the `releasedAt` years are intended to be in the future or if `2024-…` was meant.

Both new entries use `releasedAt: '2025-12-24'` and `'2025-12-23'`. If these are meant to represent already-released models (or mirror provider metadata), they likely should be `2024-12-24` / `2024-12-23`. Future dates may break any ordering or feature-flag logic that depends on `releasedAt` being correct.
</issue_to_address>
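The reviewer's concern can be illustrated with a small, hypothetical guard. This helper is not part of the model-bank codebase; it only shows how a future-dated `releasedAt` (in the `'YYYY-MM-DD'` format seen in the diff) would be flagged against the 2024 reference date the review comment implies:

```typescript
// Hypothetical helper, not part of the actual model-bank code:
// returns true when a model's releasedAt date lies after `now`.
function isFutureRelease(releasedAt: string, now: Date): boolean {
  // Appending T00:00:00Z pins the 'YYYY-MM-DD' string to UTC midnight.
  return new Date(`${releasedAt}T00:00:00Z`).getTime() > now.getTime();
}

// Against a 2024-12-26 reference date, '2025-12-24' is flagged as
// future-dated, while the suggested '2024-12-24' is not.
const reference = new Date('2024-12-26T00:00:00Z');
const flagged = isFutureRelease('2025-12-24', reference); // true
const released = isFutureRelease('2024-12-24', reference); // false
```

Whether the dates are actually wrong depends on the provider's own metadata, as the comment notes; the guard above only makes the ordering risk concrete.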

codecov bot commented Dec 26, 2025

Codecov Report

✅ All modified and coverable lines are covered by tests.
✅ Project coverage is 80.31%. Comparing base (c00dbeb) to head (a9d3481).

Additional details and impacted files
@@           Coverage Diff           @@
##             next   #10982   +/-   ##
=======================================
  Coverage   80.31%   80.31%           
=======================================
  Files         980      980           
  Lines       66983    66983           
  Branches     8780     8813   +33     
=======================================
  Hits        53800    53800           
  Misses      13183    13183           
Flag Coverage Δ
app 73.10% <ø> (ø)
database 98.25% <ø> (ø)
packages/agent-runtime 98.08% <ø> (ø)
packages/context-engine 91.61% <ø> (ø)
packages/conversation-flow 98.05% <ø> (ø)
packages/electron-server-ipc 93.76% <ø> (ø)
packages/file-loaders 92.21% <ø> (ø)
packages/model-bank 100.00% <ø> (ø)
packages/model-runtime 89.60% <ø> (ø)
packages/prompts 79.17% <ø> (ø)
packages/python-interpreter 96.50% <ø> (ø)
packages/utils 95.31% <ø> (ø)
packages/web-crawler 96.81% <ø> (ø)

Flags with carried forward coverage won't be shown.

Components Coverage Δ
Store 73.05% <ø> (ø)
Services 56.44% <ø> (ø)
Server 75.19% <ø> (ø)
Libs 39.30% <ø> (ø)
Utils 82.05% <ø> (ø)