LlamaIndex 22: Llama 3.1 Local RAG using Ollama | Python | LlamaIndex

  • Published 13 Dec 2024

COMMENTS • 25

  • @TeamDman
    @TeamDman 3 days ago

    Great overview, thanks!

  • @davidtindell950
    @davidtindell950 3 months ago +2

    Thank you, yet again! I used Ollama 'llama3.1:8b' and it answered several queries quite well!

    • @StatsWire
      @StatsWire  3 months ago

      Thank you for sharing the feedback!

  • @davidtindell950
    @davidtindell950 3 months ago +1

    I confirmed that the query did indeed make use of my local NVIDIA GPU, so it was fairly quick, but not very fast!

    • @StatsWire
      @StatsWire  3 months ago

      Thanks for the feedback. I am creating a new video in which you can access all the Llama 3.1 models using GroqCloud, and the LLM responds in about a second. The API key is completely free.

  • @pythonantole9892
    @pythonantole9892 4 months ago

    There's a step that I may have missed. Do you have the Llama 3 model installed on your local machine?

    • @StatsWire
      @StatsWire  4 months ago

      If you have Ollama installed, you can simply type 'ollama run llama3.1' in the terminal and it will automatically start downloading Llama 3.1 to your system. It can take up to 4 GB of space.
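
A quick way to confirm the download worked is to ask the local Ollama server which models it has. This is a minimal sketch against Ollama's HTTP API (`GET /api/tags` on the default port 11434); it assumes the server is running via `ollama serve` or the desktop app.

```python
import json
import urllib.error
import urllib.request


def list_local_models(base_url: str = "http://localhost:11434"):
    """Return the model tags the local Ollama server reports, or None if unreachable."""
    try:
        with urllib.request.urlopen(f"{base_url}/api/tags", timeout=5) as resp:
            data = json.load(resp)
        return [m["name"] for m in data.get("models", [])]
    except (urllib.error.URLError, OSError):
        return None  # Ollama is not running (or not installed)


models = list_local_models()
if models is None:
    print("Ollama server not reachable; install Ollama and run `ollama run llama3.1`.")
else:
    print("llama3.1 present:", any(m.startswith("llama3.1") for m in models))
```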

  • @bossganabathivellow2702
    @bossganabathivellow2702 2 months ago +1

    I'm getting the error "The `__modify_schema__` method is not supported in Pydantic v2. Use `__get_pydantic_json_schema__` instead in class `SecretStr`." How do I solve it?

    • @StatsWire
      @StatsWire  1 month ago

      Are you still facing the error?
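
This error typically appears when an older llama-index release runs against Pydantic 2.x; upgrading llama-index (or pinning `pydantic<2`) usually resolves it. A small stdlib sketch to check which versions are installed (the package names are the usual PyPI ones):

```python
from importlib import metadata


def installed_version(pkg: str):
    """Return the installed version of a package, or None if it is absent."""
    try:
        return metadata.version(pkg)
    except metadata.PackageNotFoundError:
        return None


# The `__modify_schema__` error usually means an older llama-index release
# is running against Pydantic 2.x; check what is installed before upgrading.
for pkg in ("pydantic", "llama-index"):
    print(pkg, installed_version(pkg))
```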

  • @ackerj4402
    @ackerj4402 3 months ago

    I'm having compatibility issues. Could you share your Python environment or your libraries' versions?

    • @StatsWire
      @StatsWire  3 months ago

      I am using Python 3.11.7

  • @Donovan-p7i
    @Donovan-p7i 2 months ago

    So when we execute llm = Ollama(model="llama3.1", request_timeout=420.0), does this mean we need to deploy Ollama on the local PC and pull llama3.1?
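
Yes: the `Ollama` class in LlamaIndex is only a thin HTTP client for a locally running Ollama server, so the server has to be installed and the model pulled first. A hedged sketch, assuming the `llama-index-llms-ollama` package is installed:

```python
# Sketch: requires `pip install llama-index-llms-ollama` and a local Ollama
# server that has already pulled the weights via `ollama pull llama3.1`.
try:
    from llama_index.llms.ollama import Ollama
except ImportError:
    Ollama = None
    print("llama-index-llms-ollama is not installed")

if Ollama is not None:
    # The wrapper only sends HTTP requests to the server (default
    # http://localhost:11434); constructing it does not download anything.
    llm = Ollama(model="llama3.1", request_timeout=420.0)
    print(llm.model)
```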

  • @shreniksonje4082
    @shreniksonje4082 1 month ago

    Is there any way to increase the response speed without a GPU?

    • @StatsWire
      @StatsWire  20 days ago

      No, a GPU is a must, or at least a machine with a good configuration.

  • @rahulsh5237
    @rahulsh5237 3 months ago

    Does it work with PDF images of charts and tables?

    • @StatsWire
      @StatsWire  3 months ago

      Yes, it works with all types of data.

  • @kashyapatom
    @kashyapatom 4 months ago

    I downloaded the model using Ollama on my internet-connected system, but how do I move the model files to the intranet environment? Please help.

    • @StatsWire
      @StatsWire  4 months ago

      This should not be a problem. You can use it.
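
For reference, Ollama stores downloaded weights under `~/.ollama/models` by default (the `OLLAMA_MODELS` environment variable overrides this), so copying that directory to the offline machine should carry the model across. A small sketch that resolves the directory:

```python
import os
from pathlib import Path


def ollama_model_dir() -> Path:
    """Directory where Ollama keeps downloaded weights: ~/.ollama/models by
    default, overridable with the OLLAMA_MODELS environment variable."""
    override = os.environ.get("OLLAMA_MODELS")
    return Path(override) if override else Path.home() / ".ollama" / "models"


# Copy this whole directory (manifests and blobs) to the intranet machine.
print(ollama_model_dir())
```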

  • @saraswathinatarajan1583
    @saraswathinatarajan1583 2 months ago

    Hi,
    I am not getting a response.
    I'm getting a connection refused error.
    Please help me solve this.

    • @StatsWire
      @StatsWire  2 months ago

      This can happen multiple times as the model is quite large. Please keep trying, or use a free Groq Cloud API key to run Llama 3.1 for free. I made a video on it recently.
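
"Connection refused" usually means the Ollama server itself is not running; starting it with `ollama serve` (or the desktop app) and retrying typically fixes it. A minimal stdlib check that something is listening on Ollama's default port:

```python
import socket


def ollama_port_open(host: str = "localhost", port: int = 11434,
                     timeout: float = 2.0) -> bool:
    """True if something is listening on Ollama's default port."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False


# If this prints False, start the server with `ollama serve` and retry.
print("Ollama reachable:", ollama_port_open())
```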

  • @santoshjoshi3396
    @santoshjoshi3396 1 month ago +1

    Getting this error when querying:
    ValueError: Expected where to have exactly one operator, got {} in query.

    • @tamaskovats4206
      @tamaskovats4206 1 month ago

      Unfortunately, I also received this error message!

    • @2314pedrin
      @2314pedrin 1 month ago

      The same error

    • @StatsWire
      @StatsWire  1 month ago

      I am not sure if something changed. It worked for others earlier.