331 questions
1 vote · 1 answer · 102 views
llama_index Ollama misloading model issue
I'm working with LlamaIndex and trying to run an LLM locally using Ollama. I pulled the phi3:mini model successfully and tested it in the terminal, but when I load it with llama_index.llms.ollama, it keeps ...
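A minimal sketch of the setup the excerpt describes, assuming the llama-index-llms-ollama integration package and an Ollama server running on its default local port:

    from llama_index.llms.ollama import Ollama

    # Uses the locally pulled tag; request_timeout is optional but avoids
    # timeouts on slower machines.
    llm = Ollama(model="phi3:mini", request_timeout=120.0)

    print(llm.complete("Say hello in one sentence."))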
5 votes · 0 answers · 57 views
ModuleNotFoundError: No module named 'source_salesforce' when using AirbyteSalesforceReader
I am trying to use the class AirbyteSalesforceReader from the project llama-index-readers-airbyte-salesforce (version 0.4.1). However, when I do so, e.g. in the following example:
from llama_index....
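A sketch of the intended usage with placeholder credentials; the config keys follow Airbyte's source-salesforce spec, and the error suggests the underlying Airbyte connector module (source_salesforce) is not importable in the environment:

    from llama_index.readers.airbyte_salesforce import AirbyteSalesforceReader

    # Placeholder credentials; the keys follow Airbyte's source-salesforce spec.
    salesforce_config = {
        "client_id": "<client_id>",
        "client_secret": "<client_secret>",
        "refresh_token": "<refresh_token>",
        "start_date": "2024-01-01T00:00:00Z",
        "is_sandbox": False,
    }

    reader = AirbyteSalesforceReader(config=salesforce_config)
    documents = reader.load_data(stream_name="Account")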
1 vote · 1 answer · 50 views
How do I apply multiple filters with different conditions using LlamaIndex's MetadataFilters
I am using LlamaIndex's MetadataFilters to apply a filter to my ChromaDB VectorStoreIndex as a query engine. I can set multiple filters if they use the same FilterCondition, but how would ...
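One way to express mixed conditions, assuming a llama-index-core version that accepts nested MetadataFilters (support for nesting also depends on the vector store); key names are placeholders:

    from llama_index.core.vector_stores import (
        FilterCondition,
        FilterOperator,
        MetadataFilter,
        MetadataFilters,
    )

    # AND the year constraint with an OR group over authors by nesting a second
    # MetadataFilters object inside the outer one.
    filters = MetadataFilters(
        filters=[
            MetadataFilter(key="year", operator=FilterOperator.GTE, value=2020),
            MetadataFilters(
                filters=[
                    MetadataFilter(key="author", value="alice"),
                    MetadataFilter(key="author", value="bob"),
                ],
                condition=FilterCondition.OR,
            ),
        ],
        condition=FilterCondition.AND,
    )

    query_engine = index.as_query_engine(filters=filters)  # index: the existing VectorStoreIndex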
0 votes · 0 answers · 27 views
How to exclude metadata from embedding?
I'm using LlamaIndex 0.14.7. I would like to embed document text without concatenating the metadata, because I put long text in the metadata. Here's my code:
table_vec_store: SimpleVectorStore = ...
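A minimal sketch of the usual approach: list the long-text keys in excluded_embed_metadata_keys so they are skipped when the embedding text is built (field names here are illustrative):

    from llama_index.core.schema import MetadataMode, TextNode

    node = TextNode(
        text="The content that should actually be embedded.",
        metadata={"long_context": "a very long string that should stay out of the embedding"},
    )

    # Keys listed here are skipped when the embedding text is built;
    # excluded_llm_metadata_keys does the same for text sent to the LLM.
    node.excluded_embed_metadata_keys = ["long_context"]

    print(node.get_content(metadata_mode=MetadataMode.EMBED))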
0 votes · 0 answers · 49 views
AgentWorkflow doesn't call functions when using Ollama
I modified the example from the LlamaIndex documentation: Single Agent Workflow Example to work with a local LLM using the @llamaindex/ollama adapter package.
import { tool } from 'llamaindex';
import ...
0 votes · 0 answers · 32 views
Why doesn't SQLModel perform an insert query in a LlamaIndex tool?
I have an AI agent built with Python, LlamaIndex, FastAPI and SQLModel. I want to log the ToolCall in the "search_documents" function, but it never works.
The only problem is that the ToolCall is not ...
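A hypothetical sketch of logging from inside the tool function with SQLModel; the ToolCallLog table and the SQLite engine are made up here, and the point is opening a session and committing inside the function itself:

    from typing import Optional

    from sqlmodel import Field, Session, SQLModel, create_engine

    class ToolCallLog(SQLModel, table=True):
        id: Optional[int] = Field(default=None, primary_key=True)
        tool_name: str
        query: str

    engine = create_engine("sqlite:///toolcalls.db")
    SQLModel.metadata.create_all(engine)

    def search_documents(query: str) -> str:
        """Search documents and log the call."""
        # Open a short-lived session per call and commit explicitly; without the
        # commit the insert is rolled back when the session closes.
        with Session(engine) as session:
            session.add(ToolCallLog(tool_name="search_documents", query=query))
            session.commit()
        return f"results for {query}"  # placeholder result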
1 vote · 2 answers · 455 views
How can I match the token count used by BGE-M3 embedding model before embedding?
For my particular project, it would be very helpful to know how many tokens the BGE-M3 embedding model would break a string down into before I embed the text. I could embed the string and count the ...
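A minimal sketch: load the BGE-M3 tokenizer from Hugging Face with transformers and count tokens locally, without running the embedding model at all:

    from transformers import AutoTokenizer

    # BGE-M3 ships its tokenizer alongside the model weights, so the count can be
    # taken locally without computing any embeddings.
    tokenizer = AutoTokenizer.from_pretrained("BAAI/bge-m3")

    text = "How many tokens will this string become?"
    token_ids = tokenizer.encode(text)  # includes special tokens by default
    print(len(token_ids))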
2 votes · 0 answers · 93 views
How to Run an Open-Source 20B Model locally? [closed]
I have the gpt-oss-20b model's weights locally.
What are the necessary steps to run a 20B model using transformers?
The files I downloaded include multiple safetensors files and also a .bin file.
Which one of ...
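A hedged sketch with transformers, assuming the download is a standard Hugging Face checkpoint directory; from_pretrained loads the sharded safetensors files via their index and generally prefers them over a .bin file when both are present (hardware requirements aside):

    from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline

    model_path = "/path/to/gpt-oss-20b"  # directory with config.json, tokenizer files and *.safetensors shards

    tokenizer = AutoTokenizer.from_pretrained(model_path)
    model = AutoModelForCausalLM.from_pretrained(
        model_path,
        torch_dtype="auto",
        device_map="auto",  # needs accelerate; spreads layers across available GPU/CPU memory
    )

    pipe = pipeline("text-generation", model=model, tokenizer=tokenizer)
    print(pipe("Hello", max_new_tokens=50)[0]["generated_text"])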
0 votes · 1 answer · 270 views
How to run LlamaIndex ReAct agent with gpt-oss? Getting "SyntaxError: 'async for' outside async function"
I am trying to create a ReAct agent in LlamaIndex using a local gpt-oss-20b model.
I have successfully loaded my local model using HuggingFaceLLM from llama_index.llms.huggingface and it seems to be ...
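The error usually means the streaming loop from the docs was pasted at module level; a sketch of wrapping it in a coroutine driven by asyncio.run(), reusing the llm and tools from the question's existing setup (those names are assumed):

    import asyncio

    from llama_index.core.agent.workflow import ReActAgent

    async def main() -> None:
        # llm and tools are the objects from the question's HuggingFaceLLM setup.
        agent = ReActAgent(tools=tools, llm=llm)
        handler = agent.run("What is 2 + 2?")
        # 'async for' is only legal inside an async function, hence the wrapper.
        async for event in handler.stream_events():
            print(event)
        print(await handler)

    asyncio.run(main())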
1 vote · 1 answer · 124 views
LlamaIndex Python: Metadata filter with `None` value does not retrieve documents
I’m working with LlamaIndex in Python and ran into an issue with metadata filtering.
I have a TextNode that includes a metadata field explicitly set to None.
When I try to retrieve it using a metadata ...
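A sketch of one possible workaround, assuming the installed version exposes FilterOperator.IS_EMPTY (not every vector store supports it), since an equality filter against None often does not translate into the store's query language:

    from llama_index.core.vector_stores import (
        FilterOperator,
        MetadataFilter,
        MetadataFilters,
    )

    # "field is missing/None" expressed as an operator rather than value=None;
    # IS_EMPTY availability depends on the llama-index version and vector store.
    filters = MetadataFilters(
        filters=[MetadataFilter(key="reviewed_at", value=None, operator=FilterOperator.IS_EMPTY)]
    )
    retriever = index.as_retriever(filters=filters)  # index: the existing VectorStoreIndex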
0 votes · 0 answers · 58 views
Agent not using both tool specs
I've created an Agent using LlamaIndex. When I specify only one tool spec, it works correctly. However, when I try to use two, one is ignored.
import asyncio
import logging
import os
from dotenv ...
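A sketch of one common pattern, with placeholder spec names: expand each ToolSpec via to_tool_list() and hand the agent the concatenated list:

    from llama_index.core.agent.workflow import FunctionAgent

    # Placeholder names: each BaseToolSpec is expanded into its individual
    # FunctionTools, and the agent receives the concatenation of both lists.
    tools = first_tool_spec.to_tool_list() + second_tool_spec.to_tool_list()
    agent = FunctionAgent(tools=tools, llm=llm)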
0 votes · 0 answers · 103 views
Python Flask app with LlamaIndex + Ollama significantly slower in offline Docker container vs online version with identical setup
Problem
I have two nearly identical Python applications using LlamaIndex + Ollama for document Q&A:
Online version: ~5 seconds response time
Offline version: ~18 seconds response time
FYI, I am ...
0 votes · 1 answer · 172 views
Issue with adding to chroma db
I am trying to write nodes to Chroma DB, but after creating the index, main.py exits and nothing else happens. That is, after the index is created, no message even appears in the logger. If I work with ...
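A minimal sketch of writing nodes to a persistent Chroma collection through llama-index, assuming the llama-index-vector-stores-chroma package; collection name and path are placeholders:

    import chromadb
    from llama_index.core import StorageContext, VectorStoreIndex
    from llama_index.vector_stores.chroma import ChromaVectorStore

    # PersistentClient writes to disk; get_or_create_collection is safe to rerun.
    db = chromadb.PersistentClient(path="./chroma_db")
    collection = db.get_or_create_collection("my_nodes")

    vector_store = ChromaVectorStore(chroma_collection=collection)
    storage_context = StorageContext.from_defaults(vector_store=vector_store)

    # nodes: the list of nodes the question is trying to write.
    index = VectorStoreIndex(nodes, storage_context=storage_context)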
0 votes · 0 answers · 61 views
Using llama-index with a deployed LLM
I want to make a web app that uses llama-index to answer queries over specific documents using RAG. I have set up the Llama3.2-1B-Instruct LLM locally and am using it to create indexes of the ...
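A hedged sketch, assuming the deployed Llama3.2-1B-Instruct sits behind an OpenAI-compatible endpoint (host and port here are made up) and the llama-index-llms-openai-like package is installed:

    from llama_index.core import Settings
    from llama_index.llms.openai_like import OpenAILike

    # Made-up host/port; any OpenAI-compatible server (vLLM, llama.cpp server, etc.)
    # serving the model would be configured the same way.
    Settings.llm = OpenAILike(
        model="Llama3.2-1B-Instruct",
        api_base="http://my-llm-host:8000/v1",
        api_key="not-needed",
        is_chat_model=True,
    )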
0 votes · 1 answer · 76 views
TS LlamaIndex - No current context found Error
With TypeScript LlamaIndex I have:
const { stream, sendEvent } = workflow.createContext();
sendEvent(startEvent.with(input));
But I get the error "No current context found".
Although I ...