m1rl0k / Context-Engine (Python, 191 stars, updated Jan 1, 2026)
Context-Engine: MCP retrieval stack for AI coding assistants. Hybrid code search (dense + lexical + reranker), ReFRAG micro-chunking, local LLM prompt enhancement, and dual SSE/RMCP endpoints. One command deploys Qdrant-powered indexing for Cursor, Windsurf, Roo, Cline, Codex, and any MCP client.
Topics: docker, ai, mcp, decoder, context, cursor, glm, codex, ai-agents, windsurf, openai-api, context-engine, llm, llm-inference, ollama-api, mcp-server, refrag
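The entry above describes hybrid search that fuses dense (embedding) and lexical scores before a reranking stage. This is not Context-Engine's code; it is a minimal, self-contained sketch of the general fusion idea, with toy stand-ins for the embedding model and BM25 (all names here are illustrative, not the repo's API):

```python
import math
from collections import Counter

# Toy corpus; in the real stack these would be indexed code chunks in Qdrant.
DOCS = [
    "def connect(): open qdrant client",
    "hybrid search combines dense and lexical scores",
    "rerank results with a cross encoder",
]

def lexical_score(query: str, doc: str) -> float:
    """Term-overlap count standing in for a BM25 lexical score."""
    q, d = Counter(query.lower().split()), Counter(doc.lower().split())
    return float(sum(min(q[t], d[t]) for t in q))

def dense_score(query: str, doc: str) -> float:
    """Toy 'embedding': character-frequency vectors compared by cosine."""
    qv, dv = Counter(query.lower()), Counter(doc.lower())
    dot = sum(qv[c] * dv[c] for c in qv)
    nq = math.sqrt(sum(v * v for v in qv.values()))
    nd = math.sqrt(sum(v * v for v in dv.values()))
    return dot / (nq * nd) if nq and nd else 0.0

def hybrid_search(query: str, docs: list[str], alpha: float = 0.5, top_k: int = 2) -> list[str]:
    """Fuse normalized dense and lexical scores, then return the top_k docs.
    A reranker would rescore this shortlist; here the fused score is final."""
    lex = [lexical_score(query, d) for d in docs]
    den = [dense_score(query, d) for d in docs]
    max_lex = max(lex) or 1.0
    fused = [alpha * den[i] + (1 - alpha) * lex[i] / max_lex for i in range(len(docs))]
    ranked = sorted(range(len(docs)), key=lambda i: fused[i], reverse=True)
    return [docs[i] for i in ranked[:top_k]]

print(hybrid_search("hybrid dense lexical search", DOCS))
```

The weight `alpha` trades off semantic recall (dense) against exact-token precision (lexical); production systems typically replace this linear fusion with reciprocal-rank fusion or a learned reranker over the shortlist.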
Shaivpidadi / refrag (Python, 18 stars, updated Dec 29, 2025)
REFRAG: LLM-powered representations for better RAG retrieval. Improve precision, reduce context size, same speed.
Topics: nlp, machine-learning, ai, embeddings, openai, semantic-search, rag, llm, retrieval-augmented-generation, refrag, meta-superintelligence-lab
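The claim "improve precision, reduce context size" rests on feeding the model only the relevant slices of retrieved text. As a rough, repo-agnostic illustration (not REFRAG's actual method, which uses learned representations), the sketch below splits a document into small fixed-size "micro-chunks", keeps only chunks overlapping the query, and measures the token savings; every name here is hypothetical:

```python
def micro_chunks(text: str, size: int = 8) -> list[str]:
    """Split text into fixed-size word windows ('micro-chunks')."""
    words = text.split()
    return [" ".join(words[i:i + size]) for i in range(0, len(words), size)]

def select_chunks(query: str, chunks: list[str]) -> list[str]:
    """Keep only chunks sharing at least one term with the query."""
    q = set(query.lower().split())
    return [c for c in chunks if q & set(c.lower().split())]

doc = ("retrieval augmented generation feeds retrieved passages to the model "
       "unrelated boilerplate legal text fills many tokens here "
       "the reranker keeps only passages relevant to the question")
chunks = micro_chunks(doc, size=8)
kept = select_chunks("reranker relevant passages", chunks)
# Context shrinks: kept chunks carry fewer words than the full document.
print(len(doc.split()), sum(len(c.split()) for c in kept))
```

The precision gain comes from the same filtering: irrelevant middle chunks never reach the model's context, so they cannot distract generation. A learned embedding-based selector (as the repo description implies) replaces the naive term-overlap test used here.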