@acivan (Contributor) commented Oct 24, 2025

This pull request adds support for tracking and recording streaming conversations with OpenAI models in Memori. It introduces a streaming proxy utility that aggregates streamed responses and ensures conversations are recorded correctly for enabled Memori instances, in both synchronous and asynchronous flows. Unit tests validating the proxy's behavior are included.

Streaming support and integration:

  • Added a create_openai_streaming_proxy utility that aggregates OpenAI streaming responses and invokes a finalize callback once streaming completes. This lets Memori record the conversation after the full streamed response has been received. (memori/utils/streaming_proxy.py, memori/utils/__init__.py)
  • Integrated the streaming proxy into the OpenAI integration: streaming responses are now wrapped and recorded for enabled Memori instances, with dedicated handling for both synchronous and asynchronous flows. (memori/integrations/openai_integration.py) A sketch of the proxy pattern follows this list.
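To make the pattern concrete, here is a minimal sketch of a sync streaming proxy. The function name matches the PR, but the exact signature, the chunk shape (chat-completions deltas), and the single-string callback argument are assumptions for illustration, not the actual implementation in memori/utils/streaming_proxy.py:

```python
from typing import Callable, Iterable, Iterator

def create_openai_streaming_proxy(
    stream: Iterable,                 # original OpenAI stream (assumed shape)
    finalize: Callable[[str], None],  # assumed: called once with the full text
) -> Iterator:
    """Yield chunks unchanged while accumulating the streamed text."""
    parts: list[str] = []
    try:
        for chunk in stream:
            # Chat-completions chunks carry incremental text in
            # choices[0].delta.content; it may be None (e.g. role-only chunks).
            if chunk.choices and chunk.choices[0].delta.content:
                parts.append(chunk.choices[0].delta.content)
            yield chunk
    finally:
        # Runs once the consumer exhausts (or closes) the stream, so the
        # aggregated response can be recorded at that point.
        finalize("".join(parts))
```

Wrapping the stream rather than consuming it keeps the caller's iteration behavior intact; recording happens only after the caller has drained the response.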

Testing and examples:

  • Added thorough unit tests for the streaming proxy, covering both sync and async streaming scenarios to ensure correct aggregation and callback invocation. (tests/openai_support/test_streaming_proxy.py)
  • Provided an example script demonstrating how to use Memori with OpenAI's async streaming API, including memory tracking and user interaction; a hedged usage sketch appears below. (examples/supported_llms/openai_async_custom_example.py)
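The following sketch shows what the async flow might look like end to end. The Memori import path and the enable() call are assumptions based on the example script's description, and the model name is arbitrary; only the AsyncOpenAI streaming calls follow the documented openai-python API:

```python
import asyncio

from openai import AsyncOpenAI
from memori import Memori  # assumed import path

async def main() -> None:
    # Assumed setup: enabling Memori patches the OpenAI client so streamed
    # conversations are recorded transparently via the streaming proxy.
    memori = Memori()
    memori.enable()

    client = AsyncOpenAI()
    stream = await client.chat.completions.create(
        model="gpt-4o-mini",  # arbitrary model for illustration
        messages=[{"role": "user", "content": "Hello, stream to me!"}],
        stream=True,
    )
    # Draining the stream is what triggers the finalize callback, which
    # records the aggregated assistant response.
    async for chunk in stream:
        if chunk.choices and chunk.choices[0].delta.content:
            print(chunk.choices[0].delta.content, end="", flush=True)

asyncio.run(main())
```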

Async integration improvements:

  • Updated async option preparation to properly await the original method, ensuring correct context injection for enabled Memori instances; a minimal sketch of the pattern follows. (memori/integrations/openai_integration.py)
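A small sketch of the awaiting pattern described above. The function and parameter names are illustrative, not Memori's actual internals:

```python
from typing import Any, Awaitable, Callable

async def prepare_options_async(
    original: Callable[..., Awaitable[dict]],  # wrapped SDK method (illustrative)
    inject_context: Callable[[dict], dict],    # adds Memori's memory context
    *args: Any,
    **kwargs: Any,
) -> dict:
    # Awaiting the original coroutine before injecting context ensures the
    # injection operates on the prepared options themselves, presumably
    # rather than on an unawaited coroutine object.
    options = await original(*args, **kwargs)
    return inject_context(options)
```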