Building a multi-strategy RAG workflow with reflection in LlamaIndex

  • Published Dec 11, 2024

COMMENTS • 8

  • A month ago

    Yes, thanks for this video. Well done and instructive :)

  • @bradhatch8302 · 2 months ago +1

    This example is exactly what I needed to take my problem to the next level. Thank you!

  • @sritharan20 · 3 months ago +3

    Awesome 🎉 I’m working on this problem right now, helps a lot

  • @robertobarroso5439 · 3 months ago

    These videos are gold. Such useful content, keep it up! Thanks

  • @AmeeliaK · 2 months ago +1

    This is great! How can I print which chunks or documents have been in the top 20 that the High Top K Strategy returned?

    • @AmeeliaK · a month ago

      It's in response.metadata
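The access pattern the reply points at can be sketched as follows. This is a minimal illustration, not code from the video: the stand-in classes (prefixed with `_`) only mimic the shape of the object LlamaIndex's `query_engine.query()` returns, which exposes the retrieved chunks as scored nodes in `response.source_nodes` alongside `response.metadata`:

```python
from dataclasses import dataclass, field

# Stand-in classes mimicking the shape of a LlamaIndex query response
# (mocks for illustration; the real object comes from query_engine.query()).
@dataclass
class _Node:
    text: str
    metadata: dict = field(default_factory=dict)

    def get_content(self):
        return self.text

@dataclass
class _NodeWithScore:
    node: _Node
    score: float

@dataclass
class _Response:
    source_nodes: list
    metadata: dict

def print_retrieved_chunks(response):
    """Print each retrieved chunk with its similarity score and source file."""
    for i, nws in enumerate(response.source_nodes, start=1):
        print(f"--- chunk {i} (score={nws.score:.3f}) ---")
        print(nws.node.metadata.get("file_name", "<unknown source>"))
        print(nws.node.get_content()[:200])
    # The node-level metadata the reply mentions lives here as well:
    print("metadata:", response.metadata)

# Usage with the stand-ins; a real Response works the same way.
resp = _Response(
    source_nodes=[
        _NodeWithScore(_Node("First chunk of text...", {"file_name": "a.txt"}), 0.91),
    ],
    metadata={"node-id-1": {"file_name": "a.txt"}},
)
print_retrieved_chunks(resp)
```

With a real response from the High Top K strategy, `source_nodes` would hold all 20 retrieved chunks, each with its score and source metadata.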

  • @Htyagi1998 · 3 months ago +1

    That's really cool

  • @global922 · 3 months ago +1

    How big a video card would we need to run this locally with ~1 s responses using Ollama? To run locally, change llms.OpenAI to llms.Ollama, set up the embedding and LLM settings, and swap the OpenAI instance in ctx.data["llm"] for an Ollama one. It takes the locally stored model as a parameter, and a temperature of 0.1 gives good output. Thanks for the video and have fun experimenting!
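The swap the commenter describes might look like the sketch below. It assumes the `llama-index-llms-ollama` and `llama-index-embeddings-ollama` packages and a running Ollama server; the model names are placeholders, not choices from the video, and `ctx` refers to the workflow context used there:

```python
from llama_index.core import Settings
from llama_index.llms.ollama import Ollama
from llama_index.embeddings.ollama import OllamaEmbedding

# Replace the OpenAI LLM with a locally served Ollama model.
# "llama3" is an example model name; use whatever you have pulled locally.
llm = Ollama(model="llama3", temperature=0.1, request_timeout=120.0)
Settings.llm = llm

# Swap the embeddings to a local model too, so no OpenAI calls remain
# ("nomic-embed-text" is likewise a placeholder).
Settings.embed_model = OllamaEmbedding(model_name="nomic-embed-text")

# In the workflow from the video, the LLM is stashed on the context:
# ctx.data["llm"] = llm
```

Response latency then depends on the GPU and model size rather than on this configuration; the code itself is the same either way.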