@sborenst
Written with Cursor, works on my machine.

Pull Request: Fix Vector Store Configuration & Add Ollama Support

Summary

This PR fixes critical bugs preventing OpenMemory from working and adds comprehensive Ollama integration documentation.

Problems Fixed

1. Missing Vector Store Configuration (Critical)

Issue: Fresh installations fail with "Memory client is not available" error because vector store configuration is missing.

Root Cause: config.py line 72 returns "vector_store": None, which gets saved to the database and prevents the memory client from initializing.

Files Changed:

  • openmemory/api/app/routers/config.py - Added a proper Qdrant configuration (sketched below)
  • openmemory/api/config.json - Added vector_store section for consistency
  • openmemory/api/default_config.json - Added vector_store section for consistency
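
For reference, a minimal sketch of what a non-None default could look like. The keys follow mem0's Qdrant configuration schema, but the function name, collection name, and env-var fallbacks here are illustrative, not the committed code:

import os

# Sketch only: a non-None vector_store default (keys per mem0's Qdrant schema)
def default_vector_store_config() -> dict:
    return {
        "provider": "qdrant",
        "config": {
            "collection_name": "openmemory",                      # hypothetical name
            "host": os.environ.get("QDRANT_HOST", "mem0_store"),  # the Qdrant compose service
            "port": int(os.environ.get("QDRANT_PORT", "6333")),   # Qdrant's default port
        },
    }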

2. Incorrect Volume Mount

Issue: Docker Compose volume mount overwrites the entire application directory, causing "Could not import module main" errors.

Root Cause: Mounting ./api:/usr/src/openmemory replaces the Python code built into the container.

Fix: Mount only the data directory, ./api/data:/usr/src/openmemory/data (see the excerpt below).

Files Changed:

  • openmemory/docker-compose.yml - Fixed volume mount path
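
The relevant compose excerpt, before and after. The API service name here is hypothetical and other keys are elided; mem0_store is the Qdrant service name referenced in the testing notes:

services:
  openmemory-api:                           # hypothetical name for the API service
    volumes:
      # before: - ./api:/usr/src/openmemory        (shadows the code baked into the image)
      - ./api/data:/usr/src/openmemory/data        # after: data directory only
  mem0_store:                               # Qdrant service used by the API container
    image: qdrant/qdrant                    # the standard Qdrant image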

3. Environment Variable Precedence

Issue: Database configuration overrides environment variables, preventing compose files from controlling Qdrant connection settings.

Root Cause: Database config always overwrote environment-based config, even when QDRANT_HOST and QDRANT_PORT were set.

Fix: Environment variables now take precedence over the database configuration for vector store settings, so compose files can control the connection (see the sketch below).

Files Changed:

  • openmemory/api/app/utils/memory.py - Added environment variable precedence check
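
A minimal sketch of the precedence rule. The helper name and config shape are assumptions; the real memory.py may structure this differently:

import os

# Env vars win over whatever vector_store config is stored in the database.
def apply_vector_store_overrides(db_config: dict) -> dict:
    vs = db_config.get("vector_store") or {"provider": "qdrant", "config": {}}
    cfg = vs.setdefault("config", {})
    if os.environ.get("QDRANT_HOST"):
        cfg["host"] = os.environ["QDRANT_HOST"]
    if os.environ.get("QDRANT_PORT"):
        cfg["port"] = int(os.environ["QDRANT_PORT"])
    db_config["vector_store"] = vs
    return db_config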

Enhancements

Ollama Integration Documentation

Added a comprehensive guide for switching from OpenAI to Ollama for local AI processing.

Files Added:

  • openmemory/OLLAMA.md - Complete Ollama integration guide
  • openmemory/podman-compose.yml - Podman-compatible compose file with SELinux support

The guide covers:

  • Model installation
  • Configuration steps
  • Dimension handling (OpenAI's 1536-dimension embeddings vs Ollama's 768; see the sketch after this list)
  • Testing procedures
  • Troubleshooting
  • Switching back to OpenAI
  • Podman support
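
To make the dimension point concrete, a sketch of an Ollama-backed mem0 configuration. The provider names follow mem0's "ollama" provider, but the model choices, base URL, and exact key names should be checked against OLLAMA.md:

# Sketch only: an Ollama-backed mem0 config (verify key names against OLLAMA.md)
OLLAMA_CONFIG = {
    "llm": {
        "provider": "ollama",
        "config": {
            "model": "llama3.1",  # illustrative model choice
            "ollama_base_url": "http://host.containers.internal:11434",  # Podman host gateway
        },
    },
    "embedder": {
        "provider": "ollama",
        "config": {"model": "nomic-embed-text"},  # emits 768-dim vectors, not OpenAI's 1536
    },
    "vector_store": {
        "provider": "qdrant",
        "config": {
            "collection_name": "openmemory",
            "host": "mem0_store",
            "port": 6333,
            "embedding_model_dims": 768,  # must match the embedder's output size
        },
    },
}

Because a Qdrant collection is created with a fixed vector size, moving between OpenAI (1536) and Ollama (768) embeddings means recreating the collection; that is the dimension handling the guide walks through.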

Testing

Tested Scenarios

✅ Fresh install with fixed configuration
✅ Memory creation with OpenAI
✅ Switching to Ollama
✅ Memory creation with Ollama (embeddings working)
✅ Environment variable precedence
✅ Podman compatibility
✅ Service name resolution (mem0_store)
✅ Host gateway access (host.containers.internal)
✅ Data persistence across restarts

Test Commands

# Clean install test
rm -rf api/data/
docker-compose up -d
sleep 30

# Create memory (should work without reset)
curl -X POST http://localhost:8765/api/v1/memories/ \
  -H "Content-Type: application/json" \
  -d '{"user_id": "test", "text": "Testing fixed configuration"}'

# Verify
curl "http://localhost:8765/api/v1/memories/?user_id=test"
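
And a quick check of the environment variable precedence fix. The inline-override style assumes the compose file forwards QDRANT_HOST/QDRANT_PORT into the API container, and the service name here is hypothetical:

# Point the API at an alternate Qdrant and confirm the env vars win over the DB config
QDRANT_HOST=my-qdrant QDRANT_PORT=6333 docker-compose up -d
docker-compose logs openmemory-api | grep -i qdrant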

Impact

Before This PR

  • ❌ Fresh installations don't work
  • ❌ Requires manual config reset workaround
  • ❌ Volume mount causes startup failures
  • ❌ No documentation for Ollama integration
  • ❌ Users stuck and confused

After This PR

  • ✅ Fresh installations work immediately
  • ✅ No manual configuration needed
  • ✅ Stable container startup
  • ✅ Clear Ollama integration path
  • ✅ Better user experience

Files Changed

openmemory/
├── api/
│   ├── app/
│   │   ├── routers/config.py       # Fixed vector_store default
│   │   └── utils/memory.py         # Environment variable precedence
│   ├── config.json                 # Added vector_store section
│   └── default_config.json         # Added vector_store section
├── docker-compose.yml              # Fixed volume mount, added network
├── podman-compose.yml              # New: Podman support
└── OLLAMA.md                       # New: Ollama guide

Backward Compatibility

Fully backward compatible

  • Existing installations continue working
  • No breaking changes to API
  • Configuration format unchanged
  • Only adds missing required configuration

Additional Notes

Why Vector Store Config is Critical

Mem0 requires a vector store configuration to initialize the memory client. Without one (see the sketch after this list):

  1. Memory() constructor fails
  2. No memories can be created
  3. API returns "Memory client is not available"
  4. System is unusable
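
A minimal reproduction of the failure mode, assuming mem0's Memory.from_config entry point; the API-layer guard shown here is illustrative, not the actual memory.py code:

from mem0 import Memory

def get_memory_client(config: dict):
    if not config.get("vector_store"):   # the old default was literally None
        return None                      # -> API answers "Memory client is not available"
    try:
        return Memory.from_config(config)
    except Exception:
        return None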

Why Volume Mount Matters

The old volume mount (./api:/usr/src/openmemory) shadows:

  • All Python dependencies installed in the container
  • The application code itself
  • Required system files

This causes immediate startup failures. The fix mounts only the data directory, preserving the container's built code while maintaining data persistence.

Ollama Integration Benefits

  • Cost: Free vs paid OpenAI API
  • Privacy: Local processing, no data sent externally
  • Offline: Works without internet
  • Trade-off: ~2x slower than OpenAI, but acceptable for personal use

Checklist

  • Code changes tested locally
  • Documentation added
  • Backward compatible
  • No breaking changes
  • Ready for review

Known Issues

See KNOWN_ISSUES.md for details on:

  • Memory categorization with Ollama (non-blocking, core functionality works)

Future Improvements (Not in This PR)

  • Automated switching script
  • Fix memory categorization with Ollama
  • Environment variable templates
  • Additional LLM provider documentation
@CLAassistant commented Dec 27, 2025

CLA assistant check: all committers have signed the CLA.