Run LLMs without GPUs | local-llm

  • Published 17 Oct 2024

COMMENTS • 7

  • @OsamaAltobuli • 19 days ago +1

    Thanks for sharing this.
    What is the performance compared to GPU-based machines?

  • @armanshirzad4409 • 4 months ago

    Great, thanks! A question: can I use this to run a downloaded LLM, instead of accessing Hugging Face?

  • @rakeshreddy2791 • 5 months ago

    Hey Rishab, great video. Can we fine-tune the model using local-llm?

  • @Heet10 • 5 months ago

    Hello Sir,
    Thank you for reading my message.
    Sir, I just finished my UG (BSc IT) and I'm interested in the cloud computing field. As a fresher, should I start preparing for DevOps or for cloud engineering to land a job in cloud computing as soon as possible? Also, any advice you have for my career growth would help.

  • @rishabhjain5100 • 5 months ago +1

    Is it only for inferencing?

    • @armantech5926 • 4 months ago

      Yes, I also thought that it's not a real local LLM — it's just inference.

  • @bradkeane1246 • 1 month ago +1

    Can you talk to the models in English and have them answer you? You didn't even demonstrate this.