Mastering LLM Fine-Tuning with QLoRA: Quantization on a Single GPU + Code

  • Published 25 Dec 2024

COMMENTS • 4

  • @Researcher100 2 months ago +1

    "Basically", I really liked this tutorial! Does this setting work with a larger model, say 7B?

    • @analyticsCamp 2 months ago

      Thanks for watching! It should work with a 7B model too if you have more VRAM; set cuda to True in the training arguments and keep the batch size lower. Hope it helps :)

  • @Lisa_Pfo 25 days ago +1

    I appreciate the author's efforts, but it is super hard to follow because the author is in such a rush. Instead of recording a tutorial for the public, it looks like the author is running a marathon! No disrespect intended, just offering my critique for any future tutorials. Thank you!

    • @analyticsCamp 25 days ago

      Thank you very much for your feedback. I'll definitely keep this in mind for future tutorials :)
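
The scaling advice in the first reply (a 7B model needs more VRAM, a lower per-device batch size, and the cuda flag enabled) can be sketched as a small helper. This is an illustrative sketch, not the video's actual code: the function name, the ~16 GB VRAM threshold, and the specific batch/accumulation values are assumptions.

```python
# Hypothetical sketch of the reply's advice: for a larger 7B model,
# enable CUDA and lower the per-device batch size, compensating with
# gradient accumulation. Thresholds and values are illustrative.
def training_args_for(model_size_b: float, vram_gb: int) -> dict:
    """Return an illustrative training-argument dict for QLoRA fine-tuning."""
    # A ~7B model in 4-bit QLoRA typically needs substantially more VRAM
    # than a small model; ~16 GB is an assumed rough floor here.
    if model_size_b >= 7 and vram_gb < 16:
        raise ValueError("a ~7B model in 4-bit QLoRA likely needs ~16 GB VRAM or more")
    return {
        "cuda": vram_gb > 0,                   # the 'cuda = True' flag mentioned in the reply
        "per_device_train_batch_size": 1 if model_size_b >= 7 else 4,
        # Keep the effective batch size up while the per-device batch shrinks.
        "gradient_accumulation_steps": 8 if model_size_b >= 7 else 2,
        "fp16": True,                          # mixed precision to save memory
    }

# Usage: a 7B model on a 24 GB GPU gets batch size 1 with 8 accumulation steps.
args_7b = training_args_for(7, 24)
```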