1 vote · 1 answer · 102 views

I'm working with LlamaIndex and trying to run an LLM locally using Ollama. I pulled the phi3:mini model successfully and tested it in the terminal, but if I load it with llama_index.llms.ollama, it keeps ...
Kemp (rep 13)
5 votes · 0 answers · 57 views

I am trying to use the class AirbyteSalesforceReader from the project llama-index-readers-airbyte-salesforce (version 0.4.1). However, when I do so, e.g. in the following example: from llama_index....
Henrik Nørgaard
1 vote · 1 answer · 50 views

I am using LlamaIndex's MetadataFilters to apply a filter to my ChromaDB VectorStoreIndex as a query engine. I am able to set multiple filters if they use the same FilterCondition, but how would ...
NFPortal
0 votes · 0 answers · 27 views

I'm using LlamaIndex 0.14.7. I would like to embed document text without concatenating metadata, because I put a long text in metadata. Here's my code: table_vec_store: SimpleVectorStore = ...
Trams (rep 421)
0 votes · 0 answers · 49 views

I modified the example from the LlamaIndex documentation (Single Agent Workflow Example) to work with a local LLM using the @llamaindex/ollama adapter package. import { tool } from 'llamaindex'; import ...
user3414982
0 votes · 0 answers · 32 views

I have an AI agent built with Python, LlamaIndex, FastAPI and SQLModel. I want to log the ToolCall in the "search_documents" function, but it never works. The only problem is that ToolCall is not ...
Joe (rep 23)
1 vote · 2 answers · 455 views

For my particular project, it would be very helpful to know how many tokens the BGE-M3 embedding model would break a string down into before I embed the text. I could embed the string and count the ...
ManBearPigeon
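One way to get that count without computing any embedding is to run only the model's tokenizer. A sketch assuming the transformers package and network access to the BAAI/bge-m3 repository on Hugging Face:

```python
# Hedged sketch: count BGE-M3 tokens without producing an embedding.
# Assumes `pip install transformers` and access to the BAAI/bge-m3 repo.
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("BAAI/bge-m3")

text = "How many tokens is this string?"
# add_special_tokens=True also counts the special tokens the model adds
# around the text, matching what is actually embedded.
n_tokens = len(tokenizer.encode(text, add_special_tokens=True))
print(n_tokens)
```

This is much cheaper than embedding, since only the tokenizer (not the model weights) is downloaded and run.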
2 votes · 0 answers · 93 views

I have the gpt-oss-20b model's weights locally. What are the necessary steps to run a 20B model using transformers? Among the files I downloaded are multiple safetensors files and also a .bin file. Which one of ...
miky (rep 21)
0 votes · 1 answer · 270 views

I am trying to create a ReAct agent in LlamaIndex using a local gpt-oss-20b model. I have successfully loaded my local model using HuggingFaceLLM from llama_index.llms.huggingface and it seems to be ...
meysam (rep 204)
1 vote · 1 answer · 124 views

I'm working with LlamaIndex in Python and ran into an issue with metadata filtering. I have a TextNode that includes a metadata field explicitly set to None. When I try to retrieve it using a metadata ...
Gino (rep 923)
0 votes · 0 answers · 58 views

I've created an agent using LlamaIndex. When I specify only one tool spec, it works correctly. However, when I try to use two, one is ignored. import asyncio import logging import os from dotenv ...
UserX (rep 101)
0 votes · 0 answers · 103 views

Problem: I have two nearly identical Python applications using LlamaIndex + Ollama for document Q&A. The online version has a ~5 second response time; the offline version, ~18 seconds. FYI, I am ...
sai (rep 1)
0 votes · 1 answer · 172 views

I am trying to write nodes to ChromaDB, but after creating the index, main.py closes and nothing else happens. That is, after the index is created, no message even appears in the logger. If I work with ...
Carl Brendt
0 votes · 0 answers · 61 views

I wanted to make a web app that uses llama-index to answer queries using RAG from specific documents. I have set up the Llama3.2-1B-instruct LLM locally and am using it to create indexes of the ...
Utkarsh
0 votes · 1 answer · 76 views

With TypeScript LlamaIndex I have: const { stream, sendEvent } = workflow.createContext(); sendEvent(startEvent.with(input)); But I see the error "No current context found". Although I ...
Roy Ganor
