fix: Use configured LLM model in categorization instead of hardcoded gpt-4o-mini #3625
+215 −3
Description
This PR fixes a critical issue in OpenMemory's categorization function where the model was hardcoded to `gpt-4o-mini`, causing failures when users configured alternative LLM providers such as SiliconFlow.

**The Problem:**

- `openmemory/api/app/utils/categorization.py` used a hardcoded `gpt-4o-mini` model.
- Users who configured a different model (e.g. `deepseek-ai/DeepSeek-R1`) received "Model does not exist" errors.

**The Solution:**

- Support the `openai_base_url` parameter (enables the SiliconFlow `.cn` domain and other custom endpoints).
- Add a `SiliconFlowConfig` class for proper type-safe configuration.
- Update `LlmFactory` to use `SiliconFlowConfig` instead of the generic `BaseLlmConfig`.

A minimal sketch of the resulting categorization behavior is included below.

**Dependencies:**

- `openai` Python package (no new dependencies).

Fixes #3576
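The snippet below is a sketch of the idea, not the literal diff. The helper name `get_categories_for_memory` and the plain-dict `llm_config` argument are assumptions for illustration; the point is that categorization reads the configured `model` and `openai_base_url` instead of always calling `gpt-4o-mini`:

```python
from openai import OpenAI


def get_categories_for_memory(memory: str, llm_config: dict) -> list[str]:
    """Ask the user's configured LLM to categorize a memory.

    `llm_config` is assumed to hold the configured settings, e.g.
    {"model": "deepseek-ai/DeepSeek-R1",
     "openai_base_url": "https://api.siliconflow.cn/v1",
     "api_key": "sk-..."}.
    """
    client = OpenAI(
        api_key=llm_config["api_key"],
        # Fall back to the default OpenAI endpoint when no custom URL is set.
        base_url=llm_config.get("openai_base_url") or None,
    )
    response = client.chat.completions.create(
        # Previously this was always "gpt-4o-mini", regardless of configuration.
        model=llm_config.get("model", "gpt-4o-mini"),
        messages=[
            {"role": "system", "content": "Return a short comma-separated list of categories."},
            {"role": "user", "content": memory},
        ],
    )
    return [c.strip() for c in response.choices[0].message.content.split(",")]
```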
Type of change
How Has This Been Tested?
**Manual Testing:**

Configured OpenMemory with a SiliconFlow endpoint and a non-OpenAI model:

```json
{
  "mem0": {
    "llm": {
      "provider": "openai",
      "config": {
        "model": "deepseek-ai/DeepSeek-R1",
        "openai_base_url": "https://api.siliconflow.cn/v1",
        "api_key": "sk-..."
      }
    }
  }
}
```