Endee 1.0.0 is here. After months of building and learning from our beta users, we're shipping our first production-ready release.

Endee is an open-source vector database built for modern AI systems like semantic search and RAG pipelines. It is designed for large-scale AI workloads and can scale to 1B+ vectors on a single node. In our benchmarks it runs on up to 10× less infrastructure while still outperforming existing vector databases like Pinecone, Qdrant, Weaviate, and Milvus (created by Zilliz).

Huge thanks to everyone who tried the beta, opened issues, and helped shape the project. This is just the beginning.

Try it here: https://lnkd.in/gY7Ru5Jb

If you like the project, consider giving the repo a ⭐ - it helps open-source projects grow.

#OpenSource #AIInfrastructure #VectorDatabase #GenAI #RAG
Endee 1.0.0 Released: Open-Source Vector Database for AI Systems
More Relevant Posts
Excited to share that I just released v0.4.0 of my AI Agent Automation project. This update introduces a fully working agent-level semantic memory system, built to run local-first.

What's new:
1. Embedding-based memory storage
2. Cosine similarity retrieval
3. Similarity threshold filtering
4. Retention cap per agent
5. Ollama fallback for embeddings (no external vector DB)

Agents can now persist knowledge across workflow runs and recall relevant context intelligently. Everything runs with MongoDB + pluggable LLM providers (including local models). This lays the groundwork for document-based RAG next.

Open-source and self-hosted - always focused on no vendor lock-in. Happy to connect with others building local-first AI systems.

#ArtificialIntelligence #OpenSource #LLM #AIEngineering #LocalFirst #RAG
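For readers curious what such a memory layer looks like, here is a minimal self-contained sketch of the idea. This is my own toy code, not the project's actual implementation: it swaps the Ollama embeddings and MongoDB store for a character-trigram embedding and an in-memory list, but it shows the same store/threshold/cap/recall mechanics.

```python
import math
from collections import Counter

def embed(text):
    """Toy embedding: character-trigram counts (stand-in for a real model)."""
    t = text.lower()
    return Counter(t[i:i + 3] for i in range(len(t) - 2))

def cosine(a, b):
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    if not na or not nb:
        return 0.0
    return sum(a[k] * b[k] for k in a) / (na * nb)

class AgentMemory:
    def __init__(self, cap=100, threshold=0.2):
        self.cap = cap              # retention cap per agent
        self.threshold = threshold  # similarity threshold filter
        self.items = []             # (text, embedding), oldest first

    def store(self, text):
        self.items.append((text, embed(text)))
        if len(self.items) > self.cap:  # evict the oldest beyond the cap
            self.items.pop(0)

    def recall(self, query, k=3):
        q = embed(query)
        scored = [(cosine(q, e), t) for t, e in self.items]
        kept = [(s, t) for s, t in scored if s >= self.threshold]
        return [t for _, t in sorted(kept, reverse=True)[:k]]
```

Usage follows the post's workflow: `mem.store(...)` after each run to persist knowledge, then `mem.recall("billing timeout")` to pull only the memories above the similarity threshold back into context.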
Day 97/100 of Agent Memory 🧠

I am hoping more people realise this now more than ever. Harrison Chase, creator of LangChain and one of the most influential voices in the agent engineering space, said it plainly this week: "memory is a moat."

Memory is what turns an AI system into an AI asset. If your organisation has not internalised that yet, the gap is already widening. The real IP is institutional memory (this can be both semantic and procedural memory, depending on the memory unit characteristics).

The quoted reply in the post says it better than I could: the internal agent that knows your data models, your API quirks, your edge cases from three years of production incidents - that is the moat. The OSS framework is a commodity. The memory it runs on is the actual competitive advantage.

If you have an AI leader, especially one associated with data, who still does not understand the unique benefits of treating agent memory as a first-class citizen of AI product and agent engineering, both architecturally and organisation-wide, do two things:
1️⃣ Jump ship.
2️⃣ Enrol in my 2-day bootcamp on AI Memory Management.

Lol, joking. Kind of.

Every major player in the agent stack has now planted a flag on memory, and many are implementing it as a foundational capability. Long story short: memory is a moat, and you need to own it.

I started writing about agent memory because I believed it was the most underappreciated dimension of agent engineering - that memory was not a nice-to-have feature, but an architectural discipline. The field has moved considerably in that time. The argument no longer needs to be made. The evidence is making it for us.

If you are serious about owning memory, look at adding Oracle AI Database to your infra as the unified memory core for your AI agents.

#100DaysOfAgentMemory #AgentMemory #AIAgents
I completely agree: memory isn't optional; it's indispensable for success in the age of AI agents. Though I'm not sure it's truly an era, given how quickly everything changes in this field - perhaps even the agents themselves will be obsolete this year. Who knows?
This is one of those updates that actually changes how I work. I use Colab extensively, so this one stood out.

Google just made it possible for AI agents to run code in Colab using MCP servers. In plain terms: your AI can write code, run it, see what happens, and try again on its own. MCP (Model Context Protocol) is the bridge that lets AI connect to tools like Colab, APIs, and databases.

A simple example: an agent can load a dataset in Colab, train a model, and evaluate it end to end. It's still early and yeah, it will break sometimes.

Link - https://lnkd.in/g5ZDpj2v

#AI #Agents #LLM #MCP #GenAI #MachineLearning #MLOps
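For context on what "MCP as a bridge" means on the wire: MCP messages are JSON-RPC 2.0, and a tool invocation uses the `tools/call` method. A minimal sketch of building such a request (the `run_cell` tool name is my hypothetical example; the actual Colab MCP server defines its own tool names and schemas):

```python
import json

def make_tool_call(request_id, tool_name, arguments):
    """Build an MCP tools/call request (MCP messages are JSON-RPC 2.0)."""
    return {
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",
        "params": {"name": tool_name, "arguments": arguments},
    }

# Hypothetical tool: ask the server to execute a code cell.
req = make_tool_call(1, "run_cell", {"code": "print(2 + 2)"})
print(json.dumps(req, indent=2))
```

The agent loop is then exactly what the post describes: send the request, read the tool result from the response, and let the model decide whether to retry with different code.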
Beyond the Chatbot: The power of Multi-Agent Systems 🤖🤖

I'm currently digging into the architecture of Purple Fabric, specifically how it handles Retrieval-Augmented Generation (RAG) at scale. Unlike a standard LLM, a multi-agent setup uses a "team" approach:

1️⃣ The Knowledge Garden: ingesting unstructured data from S3 and web crawlers.
2️⃣ The Atomic Agent: specialized bots designed for a single, primary task.
3️⃣ The ReAct Framework: allowing agents to "Think, Act, and Observe" before giving a final answer.

The result? Reduced hallucinations, transparent reasoning, and an AI that actually understands enterprise context.

#GenerativeAI #LLMOps #RAG #TechStack #PurpleFabric
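The Think/Act/Observe cycle in point 3 can be sketched in a few lines. This is a toy illustration with a scripted stand-in for the model and a made-up `lookup` tool, not Purple Fabric's actual framework:

```python
def react_loop(question, llm, tools, max_steps=5):
    """Minimal ReAct-style loop: the model thinks, picks an action,
    observes the tool result, and repeats until it emits a final answer."""
    transcript = f"Question: {question}\n"
    for _ in range(max_steps):
        step = llm(transcript)                # Think: model proposes next step
        if step.startswith("Final:"):
            return step[len("Final:"):].strip()
        if step.startswith("Act:"):
            tool, arg = step[len("Act:"):].strip().split(" ", 1)
            obs = tools[tool](arg)            # Act: call the chosen tool
            transcript += f"{step}\nObserve: {obs}\n"  # Observe: feed it back
    return "gave up"

# Scripted stub standing in for a real LLM.
def scripted_llm(transcript):
    if "Observe:" not in transcript:
        return "Act: lookup capital of France"
    return "Final: The capital of France is Paris."

tools = {"lookup": lambda q: "Paris" if "France" in q else "unknown"}
print(react_loop("What is the capital of France?", scripted_llm, tools))
```

The transparency benefit the post mentions falls out of the design: the transcript is a readable trace of every thought, action, and observation.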
Nerds AI Agent Ecosystem Report

Nerq's AI Agent Ecosystem grew this week. The numbers:
- Total indexed agents, tools, and servers: 209,894
- Models and datasets indexed: 3,233,508
- Total AI assets: 4,501,094
- New agents and tools added: 65,797

One notable addition is "gsd-build/gsd-2", which has a high trust score and community support. You can find more information about gsd-2 on GitHub.

Source: https://lnkd.in/ggXkV_xb
Optional learning community: https://t.me/GyaanSetuAi
RAG = Data + Retrieval + Orchestration + Evaluation.

What began as "search + LLM" is now an ecosystem of tools, frameworks, embeddings, and model strategies. RAG is not a technique; it is a whole ecosystem, and a developer's stack.

Over the past 18 months, Retrieval-Augmented Generation (RAG) has quietly evolved from a simple "search + LLM" pattern into a complete engineering discipline. Today, building AI systems isn't just about choosing a model. It's about architecting the entire pipeline: from extraction to embeddings, from vector databases to evaluation, from closed models to open-source innovators.

#RAG #AIEngineering #LLM #GenerativeAI #MachineLearning #AIArchitecture
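To make "the entire pipeline" concrete, here is a toy end-to-end sketch of the retrieval half: pre-chunked documents, embedding, retrieval, and prompt assembly. Bag-of-words vectors stand in for a real embedding model and a plain list stands in for a vector database; a production stack swaps each stage for real components.

```python
import math
import re
from collections import Counter

def embed(text):
    """Toy embedding: word counts (stand-in for a real embedding model)."""
    return Counter(re.findall(r"[a-z]+", text.lower()))

def cosine(a, b):
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    if not na or not nb:
        return 0.0
    return sum(a[w] * b[w] for w in a) / (na * nb)

# Pre-chunked documents; the list of (chunk, vector) is our "vector database".
docs = [
    "The invoice service retries failed payments three times.",
    "Deployments to production happen every Tuesday.",
]
index = [(d, embed(d)) for d in docs]

def retrieve(question, k=1):
    q = embed(question)
    ranked = sorted(((cosine(q, e), d) for d, e in index), reverse=True)
    return [d for _, d in ranked[:k]]

def build_prompt(question):
    context = "\n".join(retrieve(question))
    return f"Answer using only this context:\n{context}\n\nQ: {question}"

print(build_prompt("How many times are payments retried?"))
```

The evaluation stage the post mentions then closes the loop: score the generated answers against the retrieved context (faithfulness, relevance) and feed the scores back into chunking and retrieval choices.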
When working with RAG applications, you might wonder: can RAGAS help decide which database to use for your system? 🤔 In this short video, we explain how RAGAS evaluation can guide database selection by assessing performance, compatibility, and efficiency for generative AI outputs. If you're building RAG systems or evaluating LLMs, understanding how to align your database with evaluation metrics ensures more reliable and scalable AI models.

#Carniq #RAGAS #RAG #AIEngineering #MachineLearning #AIEvaluation #TechInsights #ArtificialIntelligence #GenerativeAI #DataManagement
AI models get the hype. Secrets power the pipeline.

Inference pipelines rely on API keys, database credentials, and service tokens that are often scattered across services and environments. This post breaks down how secrets move through model inference pipelines, and how to secure them without slowing developers down.

Read more: https://lnkd.in/ebiqk3eZ

#AIInfrastructure #SecretsManagement #DevSecOps
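One baseline pattern in this area (my own sketch, not the linked article's code): load secrets from the environment at startup, fail fast when one is missing, and redact values before they can reach logs. The variable names are hypothetical:

```python
import os

# Hypothetical secret names for an inference pipeline.
REQUIRED = ["MODEL_API_KEY", "VECTOR_DB_PASSWORD"]

def load_secrets():
    """Fail fast at startup if any required secret is absent."""
    missing = [name for name in REQUIRED if not os.environ.get(name)]
    if missing:
        raise RuntimeError(f"missing secrets: {', '.join(missing)}")
    return {name: os.environ[name] for name in REQUIRED}

def redact(value):
    """Show only the last 4 characters, for safe log output."""
    return "****" + value[-4:] if len(value) > 4 else "****"

# Demo values so the sketch runs standalone; in practice these come from
# your secrets manager or orchestrator, never from source code.
os.environ.setdefault("MODEL_API_KEY", "sk-demo-12345678")
os.environ.setdefault("VECTOR_DB_PASSWORD", "hunter2hunter2")
secrets = load_secrets()
print(redact(secrets["MODEL_API_KEY"]))
```

Centralizing access like this also gives you one obvious place to swap in a real secrets manager later without touching the rest of the pipeline.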
Going all in on observability is the game changer you need when the pace of AI keeps accelerating every day.

This weekend I moved my homelab to honeycomb.io and it's been a game changer. Real-deal Observability 2.0: no more three pillars (logs, metrics, tracing) sitting in separate silos, each needing its own MCP server for AI to interact with. Siloed data is not it. Needing three MCP servers just to correlate things (while your AI gives up and skips straight to kubectl) while customers are having a rough day isn't it either.

Their single MCP + skills are great. I fixed a traceparent issue that had been bugging me for a week. Then, in two prompts, I had a solid understanding of how both the service and the underlying infrastructure were doing - all from my phone, barely opening the app itself.
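For anyone chasing a similar traceparent issue: the W3C Trace Context `traceparent` header is four dash-separated lowercase-hex fields (version, trace-id, parent-id, trace-flags), and a strict parser is a quick way to spot a malformed one. A small sketch:

```python
import re

# traceparent = version(2 hex) - trace-id(32 hex) - parent-id(16 hex) - flags(2 hex)
PATTERN = re.compile(
    r"^(?P<version>[0-9a-f]{2})-"
    r"(?P<trace_id>[0-9a-f]{32})-"
    r"(?P<parent_id>[0-9a-f]{16})-"
    r"(?P<flags>[0-9a-f]{2})$"
)

def parse_traceparent(header):
    m = PATTERN.match(header)
    if not m:
        return None                      # malformed header
    fields = m.groupdict()
    if fields["trace_id"] == "0" * 32:   # all-zero trace-id is invalid
        return None
    return fields

hdr = "00-4bf92f3577b34da6a3ce929d0e0e4736-00f067aa0ba902b7-01"
print(parse_traceparent(hdr)["trace_id"])
```

Broken propagation usually shows up as headers that fail exactly this check: wrong field lengths, uppercase hex, or an all-zero trace-id dropped in by a misconfigured middleware.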