
add a way to "fine-tune" some params like top_k documents in retrieval #8


Description

@lgabs

There are several ways to improve the application's performance in terms of correctness. They can be roughly grouped into these cases:

  • New architectures: new ideas for the chain, new components, new strategies like standalone questions, preprocessing the text before embedding, changing the embedding model, etc.
  • Better knowledge base: improving the data from which the model retrieves information. Sometimes you need to update an existing document, sometimes you need to add new ones.
  • Different parameters for the same architecture

The latter case works like tuning "hyperparameters": even with a trained LLM, we still define parameters that directly affect the application's performance. Some examples include:

  • when making embeddings, which parameters should you use to chunk your data (e.g., chunk_size and chunk_overlap in RecursiveCharacterTextSplitter)?
  • what is the minimum value of top_k that is enough to achieve good correctness when retrieving the top k documents? (see the sketch after this list)
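
A minimal sketch of how these parameters enter the code, assuming LangChain with FAISS and OpenAI embeddings (the vector store and embedding model here are illustrative choices, not part of this proposal; `docs` stands for a knowledge base loaded elsewhere):

```python
from langchain.embeddings import OpenAIEmbeddings
from langchain.text_splitter import RecursiveCharacterTextSplitter
from langchain.vectorstores import FAISS


def build_retriever(docs, chunk_size, chunk_overlap, top_k):
    """Build one retriever for a given group of 'hyperparameters'."""
    splitter = RecursiveCharacterTextSplitter(
        chunk_size=chunk_size, chunk_overlap=chunk_overlap
    )
    chunks = splitter.split_documents(docs)
    vectorstore = FAISS.from_documents(chunks, OpenAIEmbeddings())
    # top_k controls how many documents the retriever returns per query.
    return vectorstore.as_retriever(search_kwargs={"k": top_k})


# Example: one retriever per parameter group, ready to be compared, e.g.
# retrievers = {f"k={k}": build_retriever(docs, 1000, 50, k) for k in (2, 4, 8)}
```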

For these cases, one can compile several chains, one for each group of parameters, and run evaluations to compare them. To help the developer in this process, we can offer a suite that receives a list of chains and an evaluator, and runs evaluations over the chains to compare their metrics.
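
One possible shape for that suite, sketched below. All names here (compare_chains, the chain callables, the evaluator signature) are hypothetical; the sketch assumes each chain maps a question to an answer and the evaluator returns a dict of metric name to score for a (question, answer) pair:

```python
from typing import Callable, Dict, List


def compare_chains(
    chains: Dict[str, Callable[[str], str]],  # name -> chain callable
    evaluator: Callable[[str, str], Dict[str, float]],  # (question, answer) -> metrics
    questions: List[str],
) -> Dict[str, Dict[str, float]]:
    """Run every chain over the same questions and average each metric."""
    results: Dict[str, Dict[str, float]] = {}
    for name, chain in chains.items():
        totals: Dict[str, float] = {}
        for question in questions:
            answer = chain(question)
            for metric, score in evaluator(question, answer).items():
                totals[metric] = totals.get(metric, 0.0) + score
        results[name] = {m: s / len(questions) for m, s in totals.items()}
    return results
```

A developer could then pass one chain per parameter group (e.g., "top_k=2" vs. "top_k=5") and pick the smallest top_k whose metrics plateau.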
