LoRA & QLoRA Fine-tuning Explained In-Depth

  • Published Dec 13, 2023
  • In this video, I dive into how LoRA works vs full-parameter fine-tuning, explain why QLoRA is a step up, and provide an in-depth look at the LoRA-specific hyperparameters: Rank, Alpha, and Dropout.
    0:26 - Why We Need Parameter-efficient Fine-tuning
    1:32 - Full-parameter Fine-tuning
    2:19 - LoRA Explanation
    6:29 - What should Rank be?
    8:04 - QLoRA and Rank Continued
    11:17 - Alpha Hyperparameter
    13:20 - Dropout Hyperparameter
    Ready to put it into practice? Try LoRA fine-tuning at www.entrypointai.com
  • Science & Technology

COMMENTS • 46

  • @DanielTompkinsGuitar · 3 months ago +11

    Thanks! This is among the clearest and most concise explanations of LoRA and QLoRA. Really great job.

  • @steve_wk · 4 months ago +4

    I've watched a couple of your other videos - you're a very good teacher - thanks for doing this.

  • @user-os2rb3lx7h · 4 months ago +1

    I have been using these techniques for a while now without having a good understanding of each of the parameters. Thanks for giving a good overview of both the techniques and the papers.

  • @VerdonTrigance · 4 months ago +1

    It was an incredible and very helpful video. Thank you, man!

  • @naevan1 · 16 days ago

    I love this video, man. Watched it at least 3 times and came back to it before a job interview, too. Please do more tutorials/explanations!

  • @drstrangeluv1680 · 1 month ago

    I loved the explanation! Please make more such videos!

  • @SanjaySingh-gj2kq · 5 months ago +1

    Good explanation of LoRA and QLoRA

  • @user-wr4yl7tx3w · 2 months ago +1

    This is really well presented

  • @varun_skywalker · 4 months ago +1

    This is really helpful, Thank you!!

  • @anujlahoty8022 · 20 days ago

    Loved the content! Simply explained, no BS.

  • @SantoshGupta-jn1wn · 3 months ago

    Great video, I think the best explanation I've seen on this. I'm also really confused about why they picked the rank and alpha that they did.

  • @titusfx · 3 months ago +2

    🎯 Key Takeaways for quick navigation:
    00:00 🤖 *Introduction to Low Rank Adaptation (LoRA) and QLoRA*
    - LoRA is a parameter-efficient fine-tuning method for large language models.
    - Explains the need for efficient fine-tuning in the training process of large language models.
    02:29 🛡️ *Challenges of Full Parameter Fine-Tuning*
    - Full parameter fine-tuning updates all model weights, requiring massive memory.
    - Limits fine-tuning to very large GPUs or GPU clusters due to memory constraints.
    04:19 💼 *How LoRA Solves the Memory Problem*
    - LoRA tracks changes to model weights instead of directly updating all parameters.
    - It uses rank-one matrices to efficiently calculate weight changes.
    06:11 🎯 *Choosing the Right Rank for LoRA*
    - Rank determines how precisely the low-rank matrices can capture the full weight changes in LoRA fine-tuning.
    - For most tasks, rank can be set lower without sacrificing performance.
    08:12 🔍 *Introduction to Quantized LoRA (QLoRA)*
    - QLoRA is a quantized version of LoRA that reduces model size without losing precision.
    - It exploits the normal distribution of parameters to achieve compression and recovery.
    10:46 📈 *Hyperparameters in LoRA and QLoRA*
    - Discusses hyperparameters like rank, alpha, and dropout in LoRA and QLoRA.
    - The importance of training all layers and the relationship between alpha and rank.
    13:30 🧩 *Fine-Tuning with LoRA and QLoRA in Practice*
    - Emphasizes the need to experiment with hyperparameters based on your specific data.
    - Highlights the ease of using LoRA with integrations like Replicate and Gradient.
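
    A minimal sketch of how these hyperparameters map to code, using Hugging Face's peft library (the base model name and target modules below are illustrative assumptions, not from the video):

        # pip install torch transformers peft bitsandbytes
        import torch
        from transformers import AutoModelForCausalLM, BitsAndBytesConfig
        from peft import LoraConfig, get_peft_model

        # QLoRA: load the frozen base model quantized to 4-bit NF4
        bnb = BitsAndBytesConfig(
            load_in_4bit=True,
            bnb_4bit_quant_type="nf4",
            bnb_4bit_compute_dtype=torch.bfloat16,
        )
        model = AutoModelForCausalLM.from_pretrained(
            "meta-llama/Llama-2-7b-hf",  # illustrative; any causal LM works
            quantization_config=bnb,
        )

        # LoRA: the three hyperparameters covered in the video
        config = LoraConfig(
            r=8,                                  # rank of the update matrices
            lora_alpha=16,                        # scaling factor (alpha / rank)
            lora_dropout=0.05,                    # dropout on the LoRA path
            target_modules=["q_proj", "v_proj"],  # which layers get adapters
            task_type="CAUSAL_LM",
        )
        model = get_peft_model(model, config)
        model.print_trainable_parameters()  # a small fraction of all weights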

  • @louisrose7823 · 1 month ago

    Great video!

  • @stutters3772 · 17 days ago

    This video deserves more likes

  • @markironmonger223 · 5 months ago

    This was wonderfully educational and very easy to follow. Either that makes you a great educator or me an idiot :P Regardless, thank you.

    • @EntryPointAI · 5 months ago +1

      Let's both say it's the former and call it good! 🤣

  • @RafaelPierre-vo2rq · 1 month ago

    Awesome explanation! Which camera do you use?

  • @nafassaadat8326 · 3 days ago

    Can we use QLoRA in a simple ML model like a CNN for image classification?

  • @SergieArizandieta · 1 month ago

    Wow, I'm a noobie in this field and I've been testing fine-tuning my own chatbot with different techniques, and I found a lot of stuff, but it's not common to find an explanation of the main reason for using it. Thanks a lot <3

  • @YLprime · 1 month ago +4

    Dude, u look like the Lich King with those blue eyes

    • @practicemail3227 · 23 days ago

      True. 😅 He should be in an acting career, I guess.

    • @EntryPointAI · 19 days ago

      You mean the Lich King looks like me, I think 🤪

  • @TheBojda · 1 month ago

    Nice video, congrats! LoRA is about fine-tuning, but is it possible to use it to compress the original matrices to speed up inference? I mean, decompose the original model's weight matrices into products of low-rank matrices to reduce the number of weights.

    • @rishiktiwari · 1 month ago +1

      I think you mean distillation with quantisation?

    • @EntryPointAI · 1 month ago +1

      Seems worth looking into, but I couldn't give you a definitive answer on what the pros/cons would be. Intuitively I would expect it could reduce the memory footprint but that it wouldn't be any faster.

    • @TheBojda · 1 month ago +1

      @@rishiktiwari Ty. I learned something new. :) If I understand correctly, this is a form of distillation.

    • @rishiktiwari · 1 month ago

      @@TheBojda Cheers mate! Yes, in distillation there is a student-teacher configuration and the student tries to be like the teacher with fewer parameters (aka weights). This can also be combined with quantisation to reduce the memory footprint.

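      As a side note on this thread, the idea of factoring an existing weight matrix into a low-rank product can be sketched with a truncated SVD (sizes are illustrative, and this is separate from what LoRA does during fine-tuning):

        import numpy as np

        W = np.random.randn(512, 512)  # stand-in for a pretrained weight matrix
        U, S, Vt = np.linalg.svd(W, full_matrices=False)
        r = 64                         # keep only the top-r singular values
        B = U[:, :r] * S[:r]           # 512 x r
        A = Vt[:r, :]                  # r x 512
        W_approx = B @ A               # best rank-r approximation of W
        # storage: 2 * 512 * 64 = 65,536 values vs 262,144 for W, but real
        # weight matrices are often close to full rank, so quality can suffer
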
  • @kunalnikam9112 · 24 days ago

    In LoRA, Wupdated = Wo + BA, where B and A are decomposed matrices with low rank. I wanted to ask: what do the parameters of B and A represent? Are they both parameters of the pre-trained model, are they both from the target dataset, or does one (B) represent pre-trained model parameters and the other (A) the target dataset? Please answer as soon as possible.

    • @EntryPointAI · 17 days ago +1

      Wo would be the original model parameters. A and B multiplied together represent the changes to the original parameters learned from your fine-tuning. So together they represent the difference between your final fine-tuned model parameters and the original model parameters. Individually A and B don't represent anything, they are just intermediate stores of data that save memory.
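
      A tiny numeric sketch of that relationship (sizes and values are illustrative):

        import numpy as np

        d, r = 1024, 8                    # hidden size and LoRA rank
        W0 = np.random.randn(d, d)        # original model parameters (frozen)
        B = np.zeros((d, r))              # trainable, d x r
        A = np.random.randn(r, d) * 0.01  # trainable, r x d
        W_updated = W0 + B @ A            # the fine-tuned weights
        # full fine-tuning would update all W0.size = 1,048,576 parameters;
        # LoRA trains only A.size + B.size = 16,384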

    • @kunalnikam9112 · 16 days ago

      @@EntryPointAI got it!! Thank you

  • @ArunkumarMTamil · 13 days ago

    How does LoRA fine-tuning track changes by creating two decomposition matrices?

    • @EntryPointAI · 9 days ago +1

      The matrices are multiplied together, and the result is the changes to the LLM's weights. It should be explained clearly in the video; it may help to rewatch.

    • @ArunkumarMTamil · 9 days ago

      @EntryPointAI
      My understanding:
      Original weight = 10 * 10
      To form two decomposed matrices A and B,
      let's take the rank as 1, so A is 10 * 1
      and B is 1 * 10.
      Total trainable parameters: A + B = 20.
      In LoRA, even without any dataset training, if we simply add the A and B matrices to the original matrix we can improve the accuracy slightly.
      And if we use a custom dataset in LoRA, the changes from that dataset will be captured by the A and B matrices.
      Am I right @EntryPointAI?

    • @EntryPointAI · 5 days ago +1

      @@ArunkumarMTamil The trainable parameter math looks right. But these decomposed matrices are initialized so that their product starts as all zeroes, so adding them without any custom training dataset will have no effect.
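
      A quick check of that 10 x 10, rank-1 example (in the LoRA paper, B starts as all zeroes and A is random, so it is the product B @ A that starts at zero):

        import numpy as np

        W0 = np.random.randn(10, 10)  # 100 frozen parameters
        B = np.zeros((10, 1))         # 10 trainable params, zero-initialized
        A = np.random.randn(1, 10)    # 10 trainable params, randomly initialized
        # before any training, B @ A is all zeros, so the model is unchanged:
        assert np.allclose(W0 + B @ A, W0)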

  • @ecotts · 1 month ago

    LoRa (Long Range) is a proprietary physical-layer radio communication technique that uses a spread-spectrum modulation scheme derived from chirp spread spectrum. It's a low-power wireless platform that has become the de facto wireless standard of the Internet of Things (IoT). Get your own acronym! 😂

    • @EntryPointAI · 1 month ago

      Fair - didn’t create it, just explaining it 😂

  • @vediodiary1754 · 2 months ago

    Oh my god, your eyes 😍😍😍😍 Everybody deserves a hot teacher 😂❤

  • @Ian-fo9vh · 4 months ago +1

    Bright eyes

  • @nabereon · 3 months ago

    Are you trying to hypnotize us with those eyes? 😜

  • @DrJaneLuciferian · 3 months ago

    I wish people would actually share links to papers they reference...

    • @EntryPointAI · 3 months ago +2

      LoRA: arxiv.org/abs/2106.09685
      QLoRA: arxiv.org/abs/2305.14314
      Click "Download PDF" in top right to view the actual papers.

    • @DrJaneLuciferian · 3 months ago

      @@EntryPointAI Thank you, that's kind. I did already go look it up. Sorry I was frustrated. It's very common for people to forget to put links to papers in show notes :^)

  • @TR-707 · 4 months ago

    Ahh, very interesting, thank you!
    *goes to fine-tune pictures of anime girls*

  • @coco-ge4xg · 2 days ago

    omg I always get distracted by his blue eyes 😆 and ignore what he's saying