31 questions
0 votes · 0 answers · 104 views
How can I persist a db for MultiVectorRetriever?
I am trying to build a RAG from PDFs where I extract the text and tables. I want to use a persistent db in order to store the chunks, tables, embeddings, etc., and then reload the db and use the ...
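A minimal sketch of one way to persist both halves of a MultiVectorRetriever, assuming a Chroma vector store with a persist_directory and a LocalFileStore-backed docstore for the parent documents; the paths, id_key, and OpenAIEmbeddings are placeholders rather than requirements:

import uuid

from langchain.retrievers.multi_vector import MultiVectorRetriever
from langchain.storage import LocalFileStore, create_kv_docstore
from langchain_chroma import Chroma
from langchain_core.documents import Document
from langchain_openai import OpenAIEmbeddings  # assumption: any embeddings class works here

id_key = "doc_id"
vectorstore = Chroma(
    collection_name="rag_chunks",
    embedding_function=OpenAIEmbeddings(),
    persist_directory="./chroma_db",  # written to disk, reusable across runs
)
# Wrap the byte store so it can hold Document objects (the parent chunks/tables).
docstore = create_kv_docstore(LocalFileStore("./parent_docs"))

retriever = MultiVectorRetriever(vectorstore=vectorstore, docstore=docstore, id_key=id_key)

# First run: index child chunks/summaries against their parent documents.
parents = [Document(page_content="full table or text section ...")]
doc_ids = [str(uuid.uuid4()) for _ in parents]
children = [
    Document(page_content="summary or chunk ...", metadata={id_key: doc_ids[i]})
    for i in range(len(parents))
]
retriever.vectorstore.add_documents(children)
retriever.docstore.mset(list(zip(doc_ids, parents)))

# Later runs: rebuild vectorstore, docstore, and retriever pointing at the same
# directories; nothing needs to be re-extracted or re-embedded.

On reload, constructing Chroma with the same persist_directory and LocalFileStore with the same path is enough; the retriever itself holds no state of its own.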
0 votes · 0 answers · 73 views
Error embedding content: 504 Deadline Exceeded
So when I try to generate embeddings from two different pieces of code -
here is the one mentioned on the LangChain site, but this one gives me the deadline exceeded error:
@lru_cache
def get_settings():
...
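Not a fix for the root cause, but a sketch of the usual workaround for transient 504s: embed in small batches and retry with exponential backoff. The model name, batch size, and tenacity settings below are assumptions:

from langchain_google_genai import GoogleGenerativeAIEmbeddings
from tenacity import retry, stop_after_attempt, wait_exponential

embeddings = GoogleGenerativeAIEmbeddings(model="models/embedding-001")  # assumed model

@retry(wait=wait_exponential(multiplier=2, min=2, max=60), stop=stop_after_attempt(5))
def embed_batch(batch: list[str]) -> list[list[float]]:
    # Retried on any exception, including the 504 Deadline Exceeded from the API.
    return embeddings.embed_documents(batch)

def embed_all(texts: list[str], batch_size: int = 20) -> list[list[float]]:
    vectors: list[list[float]] = []
    for start in range(0, len(texts), batch_size):
        vectors.extend(embed_batch(texts[start:start + batch_size]))
    return vectors

Smaller batches keep each request well under the server-side deadline, and the backoff absorbs the occasional slow one.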
0 votes · 1 answer · 482 views
How to properly initialize and query a PGVectorStore with metadata columns in LangChain?
I'm trying to use PGVectorStore in LangChain with metadata columns, following the example on the PyPI page, but I'm encountering issues when attempting to add and query documents with metadata. The basic ...
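The newer PGVectorStore metadata-column helpers vary by langchain-postgres version, so rather than guess at them, here is a hedged sketch using the companion PGVector class from the same package, where metadata lives in a JSONB column but can still be filtered at query time; the DSN, collection name, and metadata keys are placeholders:

from langchain_core.documents import Document
from langchain_openai import OpenAIEmbeddings  # assumption: any embeddings class
from langchain_postgres import PGVector

store = PGVector(
    embeddings=OpenAIEmbeddings(),
    collection_name="docs_with_metadata",
    connection="postgresql+psycopg://user:pass@localhost:5432/mydb",  # placeholder DSN
    use_jsonb=True,
)

store.add_documents(
    [
        Document(page_content="quarterly revenue table", metadata={"topic": "finance", "year": 2024}),
        Document(page_content="employee handbook intro", metadata={"topic": "hr", "year": 2023}),
    ]
)

# Metadata filtering against the JSONB column.
hits = store.similarity_search("revenue", k=2, filter={"topic": "finance"})
for doc in hits:
    print(doc.metadata, doc.page_content[:40])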
0 votes · 1 answer · 54 views
Problem with prompting using FAISS as a vector store to classify dialogs based on sentiment and emotions
I have a .csv dataset consisting of text dialogs between two people and the ratings of the related emotions:
| Text_Dialog | joy | anger | sad | happy |
|--------------------|-----|-------|-----|...
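A hedged sketch of one way to wire this up, assuming the column names shown above and OpenAI embeddings; the CSV path, embedding model, and k are placeholders:

import pandas as pd
from langchain_community.vectorstores import FAISS
from langchain_core.documents import Document
from langchain_openai import OpenAIEmbeddings  # assumption: any embeddings class

df = pd.read_csv("dialogs.csv")  # placeholder path
emotion_cols = ["joy", "anger", "sad", "happy"]

# Each dialog becomes a Document; its emotion ratings travel along as metadata.
docs = [
    Document(
        page_content=row["Text_Dialog"],
        metadata={col: float(row[col]) for col in emotion_cols},
    )
    for _, row in df.iterrows()
]

vectorstore = FAISS.from_documents(docs, OpenAIEmbeddings())

# Retrieve the most similar labelled dialogs for a new one, then feed both the
# new dialog and these examples (with their scores) into the classification prompt.
examples = vectorstore.similarity_search("I can't believe you did that again!", k=3)
for d in examples:
    print(d.metadata, d.page_content[:60])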
0 votes · 0 answers · 116 views
Llama Index Vector Store: filter a list of documents with a NOT feature
I have a vector store of documents; each document is a JSON document with features. I'd like to filter the documents according to some criteria. The problem is that some of the documents contain a NOT ...
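If the goal is to exclude documents whose metadata matches some value, a hedged sketch using LlamaIndex metadata filters with the not-equal operator; the "category" key and values are assumptions, and operator support ultimately depends on the vector store backend in use:

from llama_index.core import Document, VectorStoreIndex
from llama_index.core.vector_stores import (
    FilterOperator,
    MetadataFilter,
    MetadataFilters,
)

docs = [
    Document(text="spec for feature A", metadata={"category": "archived"}),
    Document(text="spec for feature B", metadata={"category": "active"}),
]
index = VectorStoreIndex.from_documents(docs)

# Keep everything whose category is NOT "archived".
filters = MetadataFilters(
    filters=[MetadataFilter(key="category", operator=FilterOperator.NE, value="archived")]
)
retriever = index.as_retriever(filters=filters, similarity_top_k=5)
for hit in retriever.retrieve("feature spec"):
    print(hit.node.metadata)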
0 votes · 0 answers · 113 views
How to expand context window based on metadata of the vector-store collection
I have working RAG code, using LangChain and Milvus. Now I'd like to add the feature to look at the metadata of each of the extracted k documents, and do the following:
find the paragraph_id of ...
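A hedged sketch of one way to do the expansion, assuming the LangChain Milvus integration and that every chunk's metadata carries an integer paragraph_id plus a doc_id; the connection details, field names, and Milvus expr syntax are assumptions:

from langchain_milvus import Milvus
from langchain_openai import OpenAIEmbeddings  # assumption: any embeddings class

vectorstore = Milvus(
    embedding_function=OpenAIEmbeddings(),
    collection_name="rag_chunks",                       # placeholder
    connection_args={"uri": "http://localhost:19530"},  # placeholder
)

query = "what does the contract say about termination?"
hits = vectorstore.similarity_search(query, k=4)

expanded = []
for doc in hits:
    pid = int(doc.metadata["paragraph_id"])
    doc_id = doc.metadata["doc_id"]
    # Pull the previous and next paragraph of the same source document via a
    # boolean filter on the collection's metadata fields.
    neighbours = vectorstore.similarity_search(
        query,
        k=3,
        expr=f'doc_id == "{doc_id}" and paragraph_id >= {pid - 1} and paragraph_id <= {pid + 1}',
    )
    expanded.extend(neighbours)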
0 votes · 1 answer · 125 views
LangChain Cohere embeddings says "invalid type: parameter texts is of type object but should be of type string" despite receiving strings
This part of the code, specifically the part where rag_chain is invoked, causes an error:
Retrying langchain_cohere.embeddings.CohereEmbeddings.embed_with_retry.<locals>._embed_with_retry in 4.0 ...
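A hedged sketch of the usual sanity check for this error: make sure the embedding call receives plain, non-empty Python strings rather than Document objects, NaN, or other non-string types; the model name is an assumption:

from langchain_cohere import CohereEmbeddings
from langchain_core.documents import Document

embeddings = CohereEmbeddings(model="embed-english-v3.0")  # assumed model name

raw_inputs = [Document(page_content="refund policy ..."), "shipping times ...", 42]

texts = [
    item.page_content if isinstance(item, Document) else str(item)
    for item in raw_inputs
    if item is not None
]
texts = [t for t in texts if t.strip()]  # empty strings are rejected too

vectors = embeddings.embed_documents(texts)
print(len(vectors), len(vectors[0]))

In a RAG chain the same check applies to whatever the chain passes into the retriever or embedder: it should be a single query string, not a dict or a Document.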
0 votes · 0 answers · 25 views
How do I use MergeDataLoader to tolerate multiple files that could be in either PDF or docx format?
I am writing a RAG chatbot that retrieves information from a given list of documents. The documents can be found in a set folder, and they could be either .pdf or .docx. I want to merge all the ...
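A hedged sketch, assuming the class intended is MergedDataLoader from langchain_community, with PyPDFLoader and Docx2txtLoader picked per file extension; the folder path is a placeholder:

from pathlib import Path

from langchain_community.document_loaders import Docx2txtLoader, PyPDFLoader
from langchain_community.document_loaders.merge import MergedDataLoader

folder = Path("./knowledge_base")  # placeholder

loaders = []
for path in folder.iterdir():
    suffix = path.suffix.lower()
    if suffix == ".pdf":
        loaders.append(PyPDFLoader(str(path)))
    elif suffix == ".docx":
        loaders.append(Docx2txtLoader(str(path)))
    # Anything else is silently skipped, which is what "tolerate" amounts to here.

docs = MergedDataLoader(loaders=loaders).load()
print(f"Loaded {len(docs)} documents from {len(loaders)} files")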
1 vote · 0 answers · 242 views
Vector dimensions mismatch in OpenSearch vector store using BedrockEmbeddings
I'm using a vector store that I've created in AWS OpenSearch Serverless. It has one index with the configuration below:
- Engine: faiss
- Precision: Binary
- Dimensions: 1024
- Distance Type: ...
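A hedged sketch of the usual cause: the embedding model's output dimension has to match the 1024-dim index, and Titan v1 returns 1536-dim vectors. Pinning the Titan v2 model and its dimensions setting is one way to line them up; the model_id, the model_kwargs key, and the region are assumptions about the Bedrock request body:

from langchain_aws import BedrockEmbeddings

embeddings = BedrockEmbeddings(
    model_id="amazon.titan-embed-text-v2:0",  # v2 supports a configurable output size
    model_kwargs={"dimensions": 1024},        # assumption: must equal the index dimension
    region_name="us-east-1",                  # placeholder region
)

vector = embeddings.embed_query("sanity check")
print(len(vector))  # should print 1024 before any documents are indexed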
0 votes · 1 answer · 1k views
How to Check if a Document Exists in a Chroma Vectorstore Using LangChain?
I am using a vectorstore of some documents in Chroma and implemented everything using the LangChain package. Here’s the package I am using:
from langchain_chroma import Chroma
I need to check if a ...
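A hedged sketch, assuming documents were added with known ids or carry a distinguishing metadata field such as "source"; Chroma's get() returns whatever matches, so an empty "ids" list means the document is absent:

from langchain_chroma import Chroma
from langchain_core.documents import Document
from langchain_openai import OpenAIEmbeddings  # assumption: any embeddings class

vectorstore = Chroma(
    collection_name="docs",
    embedding_function=OpenAIEmbeddings(),
    persist_directory="./chroma_db",  # placeholder
)
vectorstore.add_documents(
    [Document(page_content="hello world", metadata={"source": "a.txt"})],
    ids=["doc-1"],
)

def exists_by_id(doc_id: str) -> bool:
    return len(vectorstore.get(ids=[doc_id])["ids"]) > 0

def exists_by_source(source: str) -> bool:
    return len(vectorstore.get(where={"source": source})["ids"]) > 0

print(exists_by_id("doc-1"), exists_by_source("a.txt"))  # True True
print(exists_by_id("doc-2"), exists_by_source("b.txt"))  # False False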
0 votes · 1 answer · 297 views
Problem Setting up a FAISS vector memory in Python with embeddings
I'm trying to run an LLM locally and feed it with the contents of a very large PDF. I have decided to try this via a RAG. For this I wanted to create a vectorstore, which contains the content of the ...
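A hedged sketch of the standard pipeline, assuming pypdf-based loading and OpenAI embeddings; the path, chunk sizes, and embedding model are placeholders:

from langchain_community.document_loaders import PyPDFLoader
from langchain_community.vectorstores import FAISS
from langchain_openai import OpenAIEmbeddings  # assumption: any embeddings class
from langchain_text_splitters import RecursiveCharacterTextSplitter

pages = PyPDFLoader("big_document.pdf").load()  # placeholder path
chunks = RecursiveCharacterTextSplitter(
    chunk_size=1000, chunk_overlap=150
).split_documents(pages)

embeddings = OpenAIEmbeddings()
vectorstore = FAISS.from_documents(chunks, embeddings)
vectorstore.save_local("faiss_index")  # persist so the PDF is embedded only once

# Later runs: reload instead of re-embedding the whole PDF.
reloaded = FAISS.load_local(
    "faiss_index", embeddings, allow_dangerous_deserialization=True
)
print(reloaded.similarity_search("main topic of the document", k=3))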
2 votes · 1 answer · 234 views
vector store stuck with file counts in_progress or vector store is empty
I am trying to upload 2 JSON files into an Assistants vector store using the official OpenAI Python library. I also want to use a specific chunking strategy, and a different one for each file.
There ...
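A hedged sketch using the OpenAI Python SDK's beta vector store endpoints (on newer SDK versions the resource may live at client.vector_stores instead of client.beta.vector_stores); the file names and chunk sizes are placeholders, and it is also worth checking whether .json is actually on file_search's supported file type list, since an unsupported type can leave the counts stuck:

import time

from openai import OpenAI

client = OpenAI()
vs = client.beta.vector_stores.create(name="two-json-files")

chunking = {
    "a.json": {"type": "static", "static": {"max_chunk_size_tokens": 800, "chunk_overlap_tokens": 200}},
    "b.json": {"type": "static", "static": {"max_chunk_size_tokens": 300, "chunk_overlap_tokens": 50}},
}

file_ids = []
for path, strategy in chunking.items():
    uploaded = client.files.create(file=open(path, "rb"), purpose="assistants")
    client.beta.vector_stores.files.create(
        vector_store_id=vs.id,
        file_id=uploaded.id,
        chunking_strategy=strategy,  # a different strategy per file
    )
    file_ids.append(uploaded.id)

# Poll each file until it leaves "in_progress"; a failed file exposes last_error.
for file_id in file_ids:
    while True:
        vs_file = client.beta.vector_stores.files.retrieve(
            file_id=file_id, vector_store_id=vs.id
        )
        if vs_file.status in ("completed", "failed", "cancelled"):
            print(file_id, vs_file.status, vs_file.last_error)
            break
        time.sleep(2)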
1 vote · 1 answer · 372 views
Is there a way to load a saved SKLearn VectorStore using langchain?
I created and saved a vectorstore using langchain_community.vectorstores.SKLearnVectorStore and I can't load it.
I created and saved the vectorstore as below:
from langchain_community.vectorstores import ...
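A hedged sketch based on how SKLearnVectorStore persistence behaves: there is no separate load() classmethod, and re-instantiating the store with the same persist_path and serializer picks the saved data back up; the path, serializer, and embeddings below are placeholders:

from langchain_community.vectorstores import SKLearnVectorStore
from langchain_openai import OpenAIEmbeddings  # assumption: any embeddings class

embeddings = OpenAIEmbeddings()
persist_path = "./sklearn_store.json"  # placeholder

# First run: build and persist.
store = SKLearnVectorStore.from_texts(
    ["alpha document", "beta document"],
    embedding=embeddings,
    persist_path=persist_path,
    serializer="json",
)
store.persist()

# Later run: constructing the store with the same persist_path reloads the file.
reloaded = SKLearnVectorStore(
    embedding=embeddings,
    persist_path=persist_path,
    serializer="json",
)
print(reloaded.similarity_search("alpha", k=1))

The same embeddings class must be used on reload, since only the vectors and texts are stored, not the embedding model itself.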
0 votes · 1 answer · 107 views
How to query the vector database in LangChain AgentExecutor invoke, before summarizing the 'Final Answer' after all tools have been called?
LangChain AgentExecutor code:
llm = ChatOpenAI()
tools = [...
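A hedged sketch of the common pattern: expose the vector database as a retriever tool so the agent queries it during invoke(), before the 'Final Answer' is written; the vector store, prompt wording, tool name, and description are assumptions:

from langchain.agents import AgentExecutor, create_tool_calling_agent
from langchain.tools.retriever import create_retriever_tool
from langchain_community.vectorstores import FAISS
from langchain_core.prompts import ChatPromptTemplate
from langchain_openai import ChatOpenAI, OpenAIEmbeddings

vectorstore = FAISS.from_texts(["internal policy text ..."], OpenAIEmbeddings())
retriever_tool = create_retriever_tool(
    vectorstore.as_retriever(search_kwargs={"k": 4}),
    name="knowledge_base",
    description="Searches the internal knowledge base. Use this before giving the final answer.",
)

llm = ChatOpenAI()
tools = [retriever_tool]  # plus whatever other tools the agent already has

prompt = ChatPromptTemplate.from_messages(
    [
        ("system", "Answer using the knowledge_base tool results and cite what you retrieved."),
        ("human", "{input}"),
        ("placeholder", "{agent_scratchpad}"),
    ]
)

agent = create_tool_calling_agent(llm, tools, prompt)
executor = AgentExecutor(agent=agent, tools=tools, verbose=True)
print(executor.invoke({"input": "What does the policy say about remote work?"})["output"])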
0 votes · 1 answer · 369 views
Issue with Storing and Loading Index Timescale Vector Llama Index
I'm currently working with the llama_index Python package and using the llama-index-vector-stores-timescalevector extension to manage my vectors with Timescale. However, I’ve encountered an issue ...
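A hedged sketch of the store-then-reload pattern, assuming TimescaleVectorStore.from_params takes a service URL and table name (exact parameter names can differ between llama-index-vector-stores-timescalevector versions); the key point is reattaching with VectorStoreIndex.from_vector_store instead of re-running from_documents:

from llama_index.core import Document, StorageContext, VectorStoreIndex
from llama_index.vector_stores.timescalevector import TimescaleVectorStore

SERVICE_URL = "postgres://user:pass@host:5432/tsdb"  # placeholder
vector_store = TimescaleVectorStore.from_params(
    service_url=SERVICE_URL,
    table_name="llama_vectors",  # assumed parameter name
)

# First run: ingest and store into Timescale.
docs = [Document(text="content to index ...")]
storage_context = StorageContext.from_defaults(vector_store=vector_store)
index = VectorStoreIndex.from_documents(docs, storage_context=storage_context)

# Later runs: do NOT call from_documents again; reattach to the existing table.
index = VectorStoreIndex.from_vector_store(vector_store=vector_store)
print(index.as_query_engine().query("what was indexed?"))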