0 votes
0 answers
104 views

I am trying to build a RAG pipeline from PDFs where I extract the text and tables. I want to use a persistent DB to store the chunks, tables, embeddings, etc., and then reload the DB and use the ...
AndCh
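A minimal sketch of one way to do this, assuming the Chroma store from langchain_chroma; the file name, chunk sizes, and embedding model below are placeholders, not anything from the question:

```python
from langchain_chroma import Chroma
from langchain_openai import OpenAIEmbeddings                    # placeholder embedding model
from langchain_community.document_loaders import PyPDFLoader
from langchain_text_splitters import RecursiveCharacterTextSplitter

embeddings = OpenAIEmbeddings()

# First run: extract text from the PDF, split into chunks, embed, and persist to disk.
pages = PyPDFLoader("report.pdf").load()                          # hypothetical file name
chunks = RecursiveCharacterTextSplitter(chunk_size=1000, chunk_overlap=100).split_documents(pages)
db = Chroma.from_documents(chunks, embeddings, persist_directory="./chroma_db")

# Later run: reopen the same persisted collection without re-embedding anything.
db = Chroma(persist_directory="./chroma_db", embedding_function=embeddings)
docs = db.similarity_search("what does the report conclude?", k=4)
```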
0 votes
0 answers
73 views

When I try to generate embeddings with two different pieces of code - here is the one shown on the LangChain site, but it gives me a deadline-exceeded error: @lru_cache def get_settings(): ...
Akshat Soni
0 votes
1 answer
482 views

I'm trying to use PGVectorStore in LangChain with metadata columns, following the example on the PyPI page, but I'm encountering issues when attempting to add and query documents with metadata. The basic ...
ndrini
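For comparison, a sketch using the older JSONB-based PGVector class from langchain_postgres (not the newer PGVectorStore the question asks about), where metadata lives in a JSONB column and can be filtered at query time; the connection string, collection name, and metadata keys are all placeholders:

```python
from langchain_core.documents import Document
from langchain_openai import OpenAIEmbeddings                    # placeholder embedding model
from langchain_postgres import PGVector

store = PGVector(
    embeddings=OpenAIEmbeddings(),
    collection_name="docs",                                       # hypothetical collection name
    connection="postgresql+psycopg://user:pass@localhost:5432/vectordb",  # hypothetical DSN
    use_jsonb=True,
)

store.add_documents([
    Document(page_content="Invoice total is 120 EUR", metadata={"doc_type": "invoice"}),
    Document(page_content="Meeting notes from Monday", metadata={"doc_type": "notes"}),
])

# Metadata filter applied at query time against the JSONB column.
hits = store.similarity_search("how much was the invoice?", k=2, filter={"doc_type": "invoice"})
```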
0 votes
1 answer
54 views

I have a .csv dataset consisting of text dialog between two people and the ratings of the related emotions: | Text_Dialog | joy | anger | sad | happy | |--------------------|-----|-------|-----|...
user1319236
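The question is truncated, but if the goal is to feed those rows into a vector store, one hedged sketch is to turn each row into a Document with the emotion scores as metadata; the file name is a placeholder, and the column names are taken from the table header in the question:

```python
import pandas as pd
from langchain_core.documents import Document

df = pd.read_csv("dialogs.csv")                                   # hypothetical file name

docs = [
    Document(
        page_content=row["Text_Dialog"],
        metadata={"joy": row["joy"], "anger": row["anger"], "sad": row["sad"], "happy": row["happy"]},
    )
    for _, row in df.iterrows()
]
```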
0 votes
0 answers
116 views

I have a vector store of documents; each document is a JSON document with features. I'd like to filter the documents according to some criteria. The problem is that some of the documents contain a NOT ...
Gino
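One common workaround when some documents lack the field being filtered on is to over-fetch and filter in Python, where missing metadata keys can be handled explicitly. This sketch assumes `vectorstore` and `query` are the existing objects from the question; the field name and value are hypothetical:

```python
# `vectorstore` and `query` come from the question; "status"/"approved" are hypothetical.
candidates = vectorstore.similarity_search(query, k=50)           # over-fetch, then narrow down

wanted = [
    doc for doc in candidates
    if doc.metadata.get("status") == "approved"                    # .get() tolerates documents missing the key
][:5]
```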
0 votes
0 answers
113 views

I have working RAG code using LangChain and Milvus. Now I'd like to add a feature that looks at the metadata of each of the extracted k documents and does the following: find the paragraph_id of ...
ArieAI
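A sketch of reading the metadata off the retrieved documents and pulling neighbouring paragraphs with a Milvus scalar filter; it assumes `vectorstore` and `query` are the existing objects from the question and that paragraph_id is stored as a numeric scalar field:

```python
# `vectorstore` and `query` come from the existing RAG code in the question.
results = vectorstore.similarity_search(query, k=5)

for doc in results:
    pid = doc.metadata.get("paragraph_id")
    if pid is None:
        continue
    # Fetch the adjacent paragraphs with a scalar-field filter (Milvus `expr`).
    neighbours = vectorstore.similarity_search(query, k=2, expr=f"paragraph_id in [{pid - 1}, {pid + 1}]")
```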
0 votes
1 answer
125 views

This part of the code, specifically the part where rag_chain is invoked, causes an error: Retrying langchain_cohere.embeddings.CohereEmbeddings.embed_with_retry.<locals>._embed_with_retry in 4.0 ...
Akshitha Rao
0 votes
0 answers
25 views

I am writing a RAG chatbot that retrieves information from a given list of documents. The documents can be found in a set folder, and they could be either .pdf or .docx. I want to merge all the ...
Gabriel Diaz de Leon
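A minimal sketch of walking the folder and merging both file types into one document list, assuming the community loaders PyPDFLoader (needs pypdf) and Docx2txtLoader (needs docx2txt); the folder name is a placeholder:

```python
from pathlib import Path
from langchain_community.document_loaders import Docx2txtLoader, PyPDFLoader

docs = []
for path in Path("documents").iterdir():                          # hypothetical folder name
    suffix = path.suffix.lower()
    if suffix == ".pdf":
        docs.extend(PyPDFLoader(str(path)).load())                 # requires the pypdf package
    elif suffix == ".docx":
        docs.extend(Docx2txtLoader(str(path)).load())              # requires the docx2txt package

# `docs` now holds every page/section from all PDF and DOCX files, ready for splitting and embedding.
```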
1 vote
0 answers
242 views

I'm using a vector store that I've created in AWS OpenSearch Serverless. It has one index with the configuration below: - Engine: faiss - Precision: Binary - Dimensions: 1024 - Distance Type: ...
Dixit Tilaji
0 votes
1 answer
1k views

I am using a vectorstore of some documents in Chroma and implemented everything using the LangChain package. Here’s the package I am using: from langchain_chroma import Chroma I need to check if a ...
s.espriz
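One way to check whether a document is already in the store, sketched under the assumption that each document was stored with an identifying metadata field (the field name and file name below are hypothetical); langchain_chroma's Chroma exposes the underlying collection lookup through .get():

```python
from langchain_chroma import Chroma
from langchain_openai import OpenAIEmbeddings                     # placeholder embedding model

db = Chroma(persist_directory="./chroma_db", embedding_function=OpenAIEmbeddings())

# Look the document up by a metadata field instead of re-embedding it.
existing = db.get(where={"source": "report.pdf"})                  # "source"/"report.pdf" are hypothetical
already_indexed = len(existing["ids"]) > 0
```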
0 votes
1 answer
297 views

I'm trying to run an LLM locally and feed it with the contents of a very large PDF. I have decided to try this via a RAG. For this I wanted to create a vectorstore, which contains the content of the ...
Pantastix
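A sketch of one way to build that vector store fully locally, assuming a sentence-transformers embedding model and a FAISS index saved to disk; the file name, model, and chunk sizes are placeholders:

```python
from langchain_community.document_loaders import PyPDFLoader
from langchain_community.vectorstores import FAISS
from langchain_huggingface import HuggingFaceEmbeddings            # runs locally via sentence-transformers
from langchain_text_splitters import RecursiveCharacterTextSplitter

pages = PyPDFLoader("big_document.pdf").load()                      # hypothetical file name
chunks = RecursiveCharacterTextSplitter(chunk_size=800, chunk_overlap=80).split_documents(pages)

embeddings = HuggingFaceEmbeddings(model_name="sentence-transformers/all-MiniLM-L6-v2")
index = FAISS.from_documents(chunks, embeddings)
index.save_local("pdf_index")                                       # reusable across runs

retriever = index.as_retriever(search_kwargs={"k": 4})              # feed retrieved chunks to the local LLM
```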
2 votes
1 answer
234 views

I am trying to upload 2 JSON files into an Assistants vector store using the official OpenAI Python library. I also want to use a specific chunking strategy, and a different one for each file. There ...
user28146142
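A hedged sketch of per-file chunking, assuming a recent openai SDK where vector-store file creation accepts a chunking_strategy (older SDKs expose this under client.beta.vector_stores, newer ones under client.vector_stores); the file names and token sizes are placeholders, and note that file_search restricts accepted file types, so raw .json uploads may be rejected:

```python
from openai import OpenAI

client = OpenAI()
vector_store = client.beta.vector_stores.create(name="json-docs")

# Hypothetical per-file chunking settings.
strategies = {
    "file_a.json": {"type": "static", "static": {"max_chunk_size_tokens": 800, "chunk_overlap_tokens": 200}},
    "file_b.json": {"type": "static", "static": {"max_chunk_size_tokens": 300, "chunk_overlap_tokens": 50}},
}

for name, strategy in strategies.items():
    uploaded = client.files.create(file=open(name, "rb"), purpose="assistants")
    client.beta.vector_stores.files.create(
        vector_store_id=vector_store.id,
        file_id=uploaded.id,
        chunking_strategy=strategy,          # applied per file, so each file gets its own strategy
    )
```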
1 vote
1 answer
372 views

I created and saved a vectorstore using SKLearnVectorStore from langchain_community.vectorstores, and I can't load it. I created and saved the vectorstore as below: from langchain_community.vectorstores import ...
aliarda
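A sketch of the save/reload round trip, assuming the parquet serializer (which needs pandas and pyarrow installed); the store reads the file back automatically when it is constructed with the same persist_path and serializer, and the same embedding model should be reused:

```python
from langchain_community.vectorstores import SKLearnVectorStore
from langchain_openai import OpenAIEmbeddings                      # placeholder; reuse the same model as when saving

persist_path = "./sklearn_store.parquet"                            # hypothetical path
embeddings = OpenAIEmbeddings()

# Save.
store = SKLearnVectorStore.from_texts(
    ["hello world"], embeddings, persist_path=persist_path, serializer="parquet"
)
store.persist()

# Reload: constructing the store with the same persist_path and serializer loads the existing file.
reloaded = SKLearnVectorStore(embedding=embeddings, persist_path=persist_path, serializer="parquet")
print(reloaded.similarity_search("hello", k=1))
```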
0 votes
1 answer
107 views

How do I query the vector database in a LangChain AgentExecutor invocation before the 'Final Answer' is summarized, after all tools have been called? LangChain AgentExecutor code: llm = ChatOpenAI() tools = [...
chenkun
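One common pattern, sketched under assumptions: wrap the vector store as a retriever tool so the agent can (and is instructed to) query it before producing its final answer. Here `vectorstore` and any other tools are assumed to exist as in the question, the tool name is hypothetical, and the hub prompt is one standard choice:

```python
from langchain import hub
from langchain.agents import AgentExecutor, create_openai_tools_agent
from langchain.tools.retriever import create_retriever_tool
from langchain_openai import ChatOpenAI

llm = ChatOpenAI()
retriever_tool = create_retriever_tool(
    vectorstore.as_retriever(search_kwargs={"k": 4}),               # `vectorstore` from the question
    name="search_knowledge_base",                                   # hypothetical tool name
    description="Look up reference passages; call this before writing the final answer.",
)

tools = [retriever_tool]                                             # plus the question's other tools
prompt = hub.pull("hwchase17/openai-tools-agent")                    # standard tools-agent prompt
agent = create_openai_tools_agent(llm, tools, prompt)
executor = AgentExecutor(agent=agent, tools=tools, verbose=True)
```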
0 votes
1 answer
369 views

I'm currently working with the llama_index Python package and using the llama-index-vector-stores-timescalevector extension to manage my vectors with Timescale. However, I’ve encountered an issue ...
Gianluca Baglini
