🦙 LLAMA-2: EASIEST WAY TO FINE-TUNE ON YOUR DATA Using Reinforcement Learning with Human Feedback 🙌

  • Published 3 Jul 2024
  • In this video, I'll show you the easiest, simplest, and fastest way to fine-tune Llama 2 on your local machine with a custom dataset! You can also use the tutorial to train/fine-tune any other Large Language Model (LLM). In this tutorial, we will be using reinforcement learning from human feedback (RLHF) to train our Llama, which will improve its performance.
    This technique is how models like this are trained, and in this video we will see how to fine-tune this LLM.
    Please subscribe and like the video to help keep me motivated to make awesome videos like this one. :)
    Free Google Colab for 4-bit QLoRA fine-tuning of the llama-2-7b model (a minimal sketch of this setup follows after the description)
    Rise and Rejoice - Fine-tuning Llama 2 made easier with this Google Colab Tutorial
    ✍️Learn and write the code along with me.
    🙏The hand promises that if you subscribe to the channel and like this video, it will release more tutorial videos.
    👐I look forward to seeing you in future videos
    Links:
    Dataset to train: huggingface.co/datasets/Carpe...
    Reward_dataset:
    huggingface.co/datasets/Carpe...
    Second Part:
    • 🐐Llama 3 Fine-Tune wit...
    github.com/ashishjamarkattel/...
    #llama #finetune #llama2 #artificialintelligence #tutorial #stepbystep #llm #largelanguagemodels
  • Science & Technology
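
    A minimal sketch of the 4-bit QLoRA setup the description refers to, assuming the transformers, peft, and bitsandbytes libraries; this is not the video's exact notebook, and the LoRA rank, target modules, and dtypes are illustrative choices:

    import torch
    from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training
    from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

    # Quantize the frozen base weights to 4-bit NF4 so they fit on one GPU.
    bnb_config = BitsAndBytesConfig(
        load_in_4bit=True,
        bnb_4bit_quant_type="nf4",
        bnb_4bit_compute_dtype=torch.float16,
    )

    model_name = "meta-llama/Llama-2-7b-hf"  # gated repo: needs access approval
    tokenizer = AutoTokenizer.from_pretrained(model_name)
    model = AutoModelForCausalLM.from_pretrained(
        model_name,
        quantization_config=bnb_config,
        device_map="auto",
    )
    model = prepare_model_for_kbit_training(model)

    # Attach small trainable LoRA adapters; only these are updated in training.
    lora_config = LoraConfig(
        r=16,
        lora_alpha=32,
        target_modules=["q_proj", "v_proj"],  # a common choice for Llama layers
        lora_dropout=0.05,
        task_type="CAUSAL_LM",
    )
    model = get_peft_model(model, lora_config)
    model.print_trainable_parameters()  # a tiny fraction of the 7B parameters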

COMMENTS • 50

  • @rabin1620
    @rabin1620 11 months ago +1

    Loved it. Definitely gonna try. Waiting for the next video.

  • @ahmedoumar3741
    @ahmedoumar3741 4 months ago +1

    Nice video, thanks!

  • @fc4ugaming359
    @fc4ugaming359 3 months ago

    Hello brother, your video is very informative and covers each and every part: the theory, the code, and the explanations. But could you make one good demo, where you provide a paragraph and show how the data flows and what output is generated, like a before-and-after demo of the model?

  • @zainab-fahim722
    @zainab-fahim722 8 months ago +1

    Can you please link the research paper shown at the beginning of the video? Thanks!
    PS: great video! Keep up the amazing work.

    • @WhisperingAI
      @WhisperingAI  8 months ago

      arxiv.org/abs/1909.08593 here it is. Sorry for the late reply, I was quite busy.

  • @emrahe468
    @emrahe468 11 months ago +1

    This is a good one, but our custom collected data doesn't have positive/negative columns. It would be nice if you could make a video about:
    how to create a custom fine-tuning dataset for Llama 2 (without negatives and positives), and also how to use it. None of the videos on the internet focus on step 2; they just build the dataset with AutoTrain and do nothing afterwards.

    • @minjunpark6613
      @minjunpark6613 11 months ago +2

      This video is specifically for RLHF, which requires the positive/negative data. If you want to fine-tune without it, you may search for something like 'LoRA' (a minimal sketch follows below).
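
      For readers in the same situation, a minimal sketch of plain supervised fine-tuning on text with no positive/negative columns (in practice you would combine this with a LoRA/PEFT setup such as the one sketched under the description); the two-document dataset and hyperparameters are placeholders:

      from datasets import Dataset
      from transformers import (AutoModelForCausalLM, AutoTokenizer,
                                DataCollatorForLanguageModeling, Trainer,
                                TrainingArguments)

      # Hypothetical raw corpus: plain text, no chosen/rejected pairs.
      raw = Dataset.from_dict({"text": ["First training document...",
                                        "Second training document..."]})

      model_name = "meta-llama/Llama-2-7b-hf"
      tokenizer = AutoTokenizer.from_pretrained(model_name)
      tokenizer.pad_token = tokenizer.eos_token  # Llama ships with no pad token

      def tokenize(batch):
          return tokenizer(batch["text"], truncation=True, max_length=512)

      train_ds = raw.map(tokenize, batched=True, remove_columns=["text"])

      model = AutoModelForCausalLM.from_pretrained(model_name)
      trainer = Trainer(
          model=model,
          args=TrainingArguments(output_dir="sft-out",
                                 per_device_train_batch_size=1,
                                 num_train_epochs=1),
          train_dataset=train_ds,
          # mlm=False gives the causal-LM objective: predict the next token.
          data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
      )
      trainer.train()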

    • @emrahe468
      @emrahe468 11 months ago +1

      @@minjunpark6613 Thanks, after some more thinking, I may convert the dataset to fit RLHF :)

  • @user-cr1sk9fq6o
    @user-cr1sk9fq6o 11 months ago +1

    Looking forward to watching new videos on this topic. When will you upload the new video?

    • @WhisperingAI
      @WhisperingAI  11 months ago +1

      Thank you for the comment. If everything goes well, the next RLHF video will be out tomorrow, otherwise Saturday.

    • @user-cr1sk9fq6o
      @user-cr1sk9fq6o 11 months ago +1

      @@WhisperingAI How is it going? haha

    • @WhisperingAI
      @WhisperingAI  11 months ago +1

      Haha, I will upload it today. Sorry for the delay.

    • @user-cr1sk9fq6o
      @user-cr1sk9fq6o 11 months ago

      @@WhisperingAI Nice. Hope everything goes well. Thanks!

  • @Paperstressed
    @Paperstressed 7 months ago +2

    Dear sir, I have a question: during generation, the LLM's output is not the summary but the same text that was passed in the prompt.

    • @WhisperingAI
      @WhisperingAI  7 months ago

      This is a common problem when fine-tuning.
      Please check the dataloader and try increasing the context length. That should solve the issue.

    • @Paperstressed
      @Paperstressed 7 months ago +1

      @@WhisperingAI Sir, are you talking about this max_length?
      data_path = "test_policy.parquet"
      train_dataset = TLDRDataset(
          data_path,
          tokenizer,
          "train",
          max_length=256,
      )

    • @WhisperingAI
      @WhisperingAI  7 months ago

      @@Paperstressed Yes, please increase it to something higher, 512 or 1024 (as in the sketch below).
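
      That would make the call above look like this (TLDRDataset and the parquet file come from the tutorial's code; 1024 is one of the suggested values):

      data_path = "test_policy.parquet"
      train_dataset = TLDRDataset(
          data_path,
          tokenizer,
          "train",
          max_length=1024,  # was 256; too small a window truncates the summary
      )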

  • @Cloudvenus666
    @Cloudvenus666 11 months ago +1

    Would you be able to share the notebook link as well?

    • @WhisperingAI
      @WhisperingAI  11 months ago

      Thank you for the comment. Since I wrote the code locally, I will only be able to share it after the next video (which is in 2-3 days). Sorry for the inconvenience.

  • @Paperstressed
    @Paperstressed 7 months ago +1

    Dear sir, can you teach us how to fine-tune an LLM for question answering?

    • @WhisperingAI
      @WhisperingAI  7 months ago

      Sure, I have this video you can check.
      It's a step-by-step process without talking.
      ua-cam.com/video/FMd15f_rzGc/v-deo.html

  • @sauravmohanty3946
    @sauravmohanty3946 10 months ago +1

    Can I use a Falcon model with this code? Anything to keep in mind while using Falcon?

    • @WhisperingAI
      @WhisperingAI  10 months ago

      Yes, you can use any model for this. There is nothing special to keep in mind if you follow the tutorial.

    • @sauravmohanty3946
      @sauravmohanty3946 10 months ago

      For the same use case as in the above tutorial?

    • @WhisperingAI
      @WhisperingAI  10 months ago

      @@sauravmohanty3946 yes

  • @fc4ugaming359
    @fc4ugaming359 2 months ago +1

    If I want to add human feedback, can I do that?
    And if yes, then how?

    • @WhisperingAI
      @WhisperingAI  2 months ago

      Human feedback is the dataset that is created in steps 1 and 2.
      So you can create your own dataset that matches that format to train all three steps (example format below).
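
      A sketch of what one comparison row can look like, assuming the chosen/rejected column names used by common RLHF tooling; the example row itself is made up:

      from datasets import Dataset

      # One prompt, a human-preferred response, and a rejected response.
      reward_data = Dataset.from_dict({
          "prompt":   ["Summarize: The meeting covered Q3 revenue and hiring."],
          "chosen":   ["The meeting covered Q3 revenue and hiring plans."],
          "rejected": ["Meetings are usually held in conference rooms."],
      })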

    • @fc4ugaming359
      @fc4ugaming359 2 months ago

      @@WhisperingAI Like, at training time, can I provide some kind of feedback, such as label selection, during model training?

  • @_SHRUTIDAYAMA
    @_SHRUTIDAYAMA 7 months ago +1

    The Colab code didn't work... it shows a "cuda" error in step 3... can you please help?

    • @WhisperingAI
      @WhisperingAI  7 months ago

      Can you please provide an explanation of the issue?

    • @_SHRUTIDAYAMA
      @_SHRUTIDAYAMA 7 months ago

      0%| | 0/3 [00:00

  • @Ryan-yj4sd
    @Ryan-yj4sd 11 months ago +1

    How do I save and push the model to Hugging Face?

    • @WhisperingAI
      @WhisperingAI  11 months ago

      Thanks for the comment. In that case you can just add push_to_hub=True to the TrainingArguments, or call trainer.push_to_hub() after training the model. Hope this helps (sketch below).
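
      A short sketch of both options, assuming you are already logged in to the Hub (huggingface-cli login); the repo name is hypothetical:

      from transformers import TrainingArguments

      # Option 1: enable pushing from the start of training.
      args = TrainingArguments(
          output_dir="llama2-finetuned",
          push_to_hub=True,  # checkpoints are uploaded to the Hub
          hub_model_id="your-username/llama2-finetuned",  # hypothetical repo
      )

      # Option 2: push once, after training finishes.
      # trainer.push_to_hub()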

    • @Ryan-yj4sd
      @Ryan-yj4sd 11 months ago +1

      @@WhisperingAI But that just pushes the adapters, though? You can't do inference with that.

    • @WhisperingAI
      @WhisperingAI  11 months ago

      I'm not sure which video you are talking about, but I guess you are using LoRA and PEFT and defining your model via PEFT. In that case you need to:
      1. save the model via model.base_model.save_pretrained("/path_to_model")
      2. load the model
      3. call model.push_to_hub()
      (see the sketch below)
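
      One way to get a standalone model rather than bare adapters is PEFT's merge_and_unload(), available in recent peft versions; a sketch, with a hypothetical adapter path and repo name:

      from peft import AutoPeftModelForCausalLM

      # Load the saved adapters together with their base model...
      model = AutoPeftModelForCausalLM.from_pretrained("/path_to_model")

      # ...fold the LoRA weights into the base weights, so the result
      # runs inference on its own without the peft wrapper...
      merged = model.merge_and_unload()

      # ...and push the merged weights to the Hub.
      merged.push_to_hub("your-username/llama2-merged")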

  • @AleixPerdigo
    @AleixPerdigo 8 months ago

    Great video! I would like to create my own fine-tuned model for my company. Could I contact you?

  • @brainybotnlp
    @brainybotnlp 11 months ago +1

    Great content. Can you please share the code?

    • @WhisperingAI
      @WhisperingAI  11 months ago

      Thank you for your comment, but I will only be able to share the code after the next video, which I have planned in 2-3 days and which will be a full tutorial with source code.
      Sorry for the trouble.

    • @brainybotnlp
      @brainybotnlp 11 months ago

      @@WhisperingAI For sure, waiting.

    • @ELDoradoEureka
      @ELDoradoEureka 9 months ago

      ​@@brainybotnlp
      colab.research.google.com/drive/1gAixKzPXCqjadh6KLsR5ZRUnb8VRvZl1?usp=sharing

  • @namantyagi6294
    @namantyagi6294 8 months ago +1

    Is 12 GB of VRAM enough to fine-tune the model with 4-bit QLoRA?

    • @WhisperingAI
      @WhisperingAI  8 months ago

      It might not be enough. Llama is about 13 GB in fp16, and loading it in 4-bit decreases the weight size by 4x, to 13/4 ≈ 3.25 GB. But training overhead brings the total back up: as per that calculation it requires roughly 4 × (13/4) = 13 GB of VRAM, and it might increase to 15 GB depending on the type of optimizer you are using (rough arithmetic below).
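
      The back-of-envelope arithmetic, as Python; the 4x training-overhead factor mirrors the rough rule above, and real usage varies with batch size, sequence length, and optimizer:

      fp16_weights_gb = 13                    # llama at 2 bytes/param ≈ 13 GB
      quantized_gb = fp16_weights_gb / 4      # 4-bit weights ≈ 3.25 GB
      training_total_gb = 4 * quantized_gb    # + activations, gradients,
                                              #   optimizer state ≈ 13 GB
      print(quantized_gb, training_total_gb)  # 3.25 13.0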

    • @namantyagi6294
      @namantyagi6294 8 months ago

      @@WhisperingAI Is it possible to offload some part of the memory requirement to CPU/RAM during fine-tuning?

  • @HaroldKouadio-gj7uw
    @HaroldKouadio-gj7uw 1 month ago

    There is an error message when I try to install trl; I don't know why, and I am stuck... Can I have your email to discuss this issue with you?

    • @WhisperingAI
      @WhisperingAI  1 month ago

      Can you raise an issue on GitHub?