Fine-tuning Llama 2 on Your Own Dataset | Train an LLM for Your Use Case with QLoRA on a Single GPU

  • Published 3 Dec 2024

COMMENTS • 68

  • @venelin_valkov
    @venelin_valkov  1 year ago +7

    Full text tutorial (requires MLExpert Pro): www.mlexpert.io/prompt-engineering/fine-tuning-llama-2-on-custom-dataset

    • @kiranatmakuri3710
      @kiranatmakuri3710 1 year ago

      Can you send me your email please? I have a question I can't ask in public.

    • @williamfussell1956
      @williamfussell1956 1 year ago +2

      I keep having problems with model.merge_and_unload()...
      It seems to be a bit different from the documentation on Hugging Face... is there something I am missing here?
      The error says that 'LlamaForCausalLM' object has no attribute 'merge_and_unload'.
      Any ideas?

    • @pablomeza5932
      @pablomeza5932 1 year ago

      @@williamfussell1956 Did you fix that?

    • @fl028
      @fl028 9 months ago

      @@williamfussell1956 merged_model = trained_model.merge_and_unload()

    • @lotfiamr8433
      @lotfiamr8433 2 months ago

      @@pablomeza5932 did you fix it, please?
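
      For anyone hitting this: merge_and_unload() is a method of the PEFT-wrapped model returned by training, not of the base LlamaForCausalLM. A minimal sketch of the intended call pattern, assuming the adapter was saved to a hypothetical local directory "llama2-qlora-adapter":

      import torch
      from peft import PeftModel
      from transformers import AutoModelForCausalLM

      # Load the base model in half precision (not quantized), so the LoRA weights can be folded in.
      base_model = AutoModelForCausalLM.from_pretrained(
          "meta-llama/Llama-2-7b-hf",
          torch_dtype=torch.float16,
      )

      # Wrapping it with the trained adapter returns a PeftModel,
      # which is the object that actually has merge_and_unload().
      trained_model = PeftModel.from_pretrained(base_model, "llama2-qlora-adapter")

      merged_model = trained_model.merge_and_unload()
      merged_model.save_pretrained("llama2-qlora-merged")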

  • @vivekjyotibhowmik8008
    @vivekjyotibhowmik8008 1 year ago +28

    Can you provide the Google Colab notebook?

  • @christopherbader2222
    @christopherbader2222 1 year ago +4

    This is great. A version for question answering would be helpful too.

  • @echos01
    @echos01 1 year ago +2

    Excellent work! You are the hero!

  • @fabsync
    @fabsync 7 months ago

    Fantastic video! It would be nice to see a full tutorial on how to do it with PDFs locally...

  • @williamgomezsantana
    @williamgomezsantana 9 months ago +2

    Incredible video!! Thank you very much. I have a question: isn't it mandatory to put a token like EOS at the end of the summary, so the LLM knows when to finish the instruction?
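
    Appending the tokenizer's EOS token to each training example is indeed the usual way to teach the model where a summary ends. A minimal sketch, assuming hypothetical "conversation" and "summary" fields and an Alpaca-style template:

    from transformers import AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained("meta-llama/Llama-2-7b-hf")

    def format_example(example):
        # Hypothetical prompt layout; the key detail is the trailing EOS token,
        # which lets the fine-tuned model learn where a summary should stop.
        text = (
            "### Instruction: Summarize the conversation below.\n"
            f"### Input:\n{example['conversation']}\n"
            f"### Summary:\n{example['summary']}{tokenizer.eos_token}"
        )
        return {"text": text}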

  • @stawils
    @stawils 1 year ago +2

    Good stuff coming, thank you in advance ❤

  • @GregMatoga
    @GregMatoga 1 year ago +5

    Thank you for this! Is finetuning a good approach for a private/proprietary documentation Q&A?

  • @krishchatterjee2819
    @krishchatterjee2819 1 year ago +4

    Excellent video! What changes to the input do we need to make to use 8-bit quantization instead of 4-bit? Thanks.
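
    The switch is mostly confined to the quantization config passed to from_pretrained. A rough sketch of both variants, assuming a transformers version with BitsAndBytesConfig:

    import torch
    from transformers import AutoModelForCausalLM, BitsAndBytesConfig

    # 4-bit (QLoRA-style) configuration.
    bnb_4bit = BitsAndBytesConfig(
        load_in_4bit=True,
        bnb_4bit_quant_type="nf4",
        bnb_4bit_compute_dtype=torch.float16,
        bnb_4bit_use_double_quant=True,
    )

    # 8-bit alternative: swap in this config; the rest of the pipeline stays the same.
    bnb_8bit = BitsAndBytesConfig(load_in_8bit=True)

    model = AutoModelForCausalLM.from_pretrained(
        "meta-llama/Llama-2-7b-hf",
        quantization_config=bnb_8bit,  # or bnb_4bit
        device_map="auto",
    )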

  • @HeywardLiu
    @HeywardLiu 1 year ago +1

    Awesome tutorial!

  • @VaibhavPatil-rx7pc
    @VaibhavPatil-rx7pc 1 year ago +2

    Super excited

  • @NitroBrewbell
    @NitroBrewbell 1 year ago +1

    very helpful. Thanks for the videos.

  • @AbdulBasit-ff6tq
    @AbdulBasit-ff6tq 1 year ago +4

    Do you have, or plan to make, a tutorial for something like below?
    A tutorial for plain-text fine-tuning, and then tuning that model to make it an instruct-tuned one?

  • @parisapouya6716
    @parisapouya6716 1 year ago +1

    Awesome work! Thanks a ton!

  • @ikurious
    @ikurious 1 year ago +3

    Great video!
    Is there any way to build my instruction dataset for instruct fine-tuning from classical textbooks?

    • @ikurious
      @ikurious 1 year ago

      @@user-xt6tu3xt3t But then how do I convert it into question & answer format?

    • @mauriciososa9722
      @mauriciososa9722 1 year ago

      @@ikurious The best way is manually, by a human.

  • @techtraversal219
    @techtraversal219 1 year ago +1

    Thanks for sharing, really helpful. Waiting for my Llama model access to follow it step by step. Can I use any other model in place of this?

    • @srushtiharyan2033
      @srushtiharyan2033 10 months ago

      Did you get the access? And how long did it take?

  • @experiment5762
    @experiment5762 1 year ago

    Great!! Do some videos regarding RLHF.

  • @jensonjoy83
    @jensonjoy83 1 year ago +1

    Will you be able to add a tutorial for the llama2-chat model?

  • @lyovazi8533
    @lyovazi8533 1 year ago +1

    very good video

  • @tahahuraibb5833
    @tahahuraibb5833 1 year ago +2

    default_factory=lambda: ["q_proj", "v_proj"]
    Why did you not add this? Is it because HF does it under the hood?

    • @venelin_valkov
      @venelin_valkov  1 year ago +1

      I totally forgot about the `target_modules`. I retrained and updated the notebook/tutorial with those. The results are better!
      Here's the list:
      lora_target_modules = [
      "q_proj",
      "up_proj",
      "o_proj",
      "k_proj",
      "down_proj",
      "gate_proj",
      "v_proj",
      ]
      I composed it from here: github.com/huggingface/transformers/blob/f6301b9a13b8467d1f88a6f419d76aefa15bd9b8/src/transformers/models/llama/convert_llama_weights_to_hf.py#L144
      Thank you!

    • @williamfussell1956
      @williamfussell1956 1 year ago

      Is there a good resource for understanding 'target modules' for different models? @@venelin_valkov
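
      In the PEFT API, the list Venelin posted is what goes into LoraConfig's target_modules. A minimal sketch (the rank/alpha/dropout values here are illustrative, not necessarily the ones from the video):

      from peft import LoraConfig, get_peft_model

      lora_config = LoraConfig(
          r=16,              # illustrative rank
          lora_alpha=32,
          lora_dropout=0.05,
          bias="none",
          task_type="CAUSAL_LM",
          target_modules=[
              "q_proj", "k_proj", "v_proj", "o_proj",
              "gate_proj", "up_proj", "down_proj",
          ],
      )

      # model is the quantized base model loaded earlier in the notebook.
      peft_model = get_peft_model(model, lora_config)
      peft_model.print_trainable_parameters()

      For other architectures the projection names differ; printing the layers with model.named_modules() is one way to see which names are available for a given model.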

  • @tarunku9378
    @tarunku9378 1 year ago +1

    I still don't get it. I have my data locally; how should I start fine-tuning it? Please tell me.

  • @shopbc5553
    @shopbc5553 1 year ago +1

    Do you have an idea how GPT-4 is so good with its responses from its base model when I upload documents to it?
    Could it be the parameter size only, or do you think other technologies determine the quality difference?

    • @tillwill3232
      @tillwill3232 6 months ago

      Parameter size and training data, I guess? Also, I don't think we know their exact network architecture, since they didn't release it publicly; you can only access it via the product.

  • @williamfussell1956
    @williamfussell1956 1 year ago

    Hi there, I am just reading through the repo and I'm pretty sure this is the answer... I just wanted to make sure...
    The actual input to the model comes only from the [text] field, is that correct? Since the [text] field contains the prompt, the conversation and the summary...

  • @elysiryuu
      @elysiryuu 9 months ago

    Thanks for the insight, is it possible to perform training locally, with 8 GB VRAM?

  • @cancheers
    @cancheers 1 year ago +2

    Should it be merged_model = trained_model.merge_and_unload()?
    It cannot run; the process gets killed.

    • @rone3243
      @rone3243 1 year ago

      I have this problem as well😢

    • @kpratik41
      @kpratik41 1 year ago

      Were you able to solve this?

    • @fl028
      @fl028 9 months ago +1

      merged_model = trained_model.merge_and_unload()

    • @lotfiamr8433
      @lotfiamr8433 2 months ago

      @@rone3243 did you resolve it, please?

    • @lotfiamr8433
      @lotfiamr8433 2 months ago

      @@fl028 did you resolve it, please?
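
      A process that dies with "Killed" at this step is usually the operating system running out of RAM, since merging needs the full-precision base weights in memory. One workaround sketch for a machine with limited RAM, assuming the adapter lives in a hypothetical "llama2-qlora-adapter" directory:

      import torch
      from peft import PeftModel
      from transformers import AutoModelForCausalLM

      base_model = AutoModelForCausalLM.from_pretrained(
          "meta-llama/Llama-2-7b-hf",
          torch_dtype=torch.float16,
          low_cpu_mem_usage=True,   # avoid materialising the weights twice while loading
      )
      trained_model = PeftModel.from_pretrained(base_model, "llama2-qlora-adapter")
      merged_model = trained_model.merge_and_unload()
      merged_model.save_pretrained("llama2-qlora-merged", safe_serialization=True)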

  • @AbhisekPanigrahi-k8m
    @AbhisekPanigrahi-k8m 11 months ago +1

    Hola! For me the validation log shows "No log" with the Mistral instruct model. Please help, anyone.

  • @GooBello-gr2ls
    @GooBello-gr2ls 1 year ago +1

    Can I download the fine-tuned model after fine-tuning?
    Is it in .bin or .safetensors format, or something else?
    Because I'm currently trying to do fine-tuning in textgen, but having trouble with the dataset (format), I guess.

    • @lisab1360
      @lisab1360 1 year ago

      Do you already know how you can download the fine-tuned model?

  • @DawnWillTurn
    @DawnWillTurn 1 year ago +1

    Any idea how we can deploy llama-2 on the Hugging Face API? Just like the Falcon one; it has some issue with the handler.

  • @vitocorleon6753
    @vitocorleon6753 1 year ago +1

    I need help please. I just want to be pointed in the right direction, since I'm new to this and couldn't really find any proper guide summarizing the steps for what I want to accomplish.
    I want to integrate a Llama 2 70B chatbot into my website. I have no idea where to start. I looked into setting up the environment on one of my cloud servers (it has to be private). Now I'm looking into training/fine-tuning the chat model using our data from our DBs (it's not clear to me here, but I assume it involves two steps: first I have to have the data in CSV format, since that's easier for me; second, I will need to format it in the Alpaca or OpenAssistant format). After that, the result should be a deployment-ready model?
    Just bullet points; I'd highly appreciate that.

    • @vitocorleon6753
      @vitocorleon6753 1 year ago

      @nty3929 Oh :/ I’m still lost about this but thank you for your effort nevertheless!

    • @GregMatoga
      @GregMatoga 1 year ago

      @nty3929 Yeah, bots are ruthless here and YouTube is having none of it, even at that cost. Guess they expect to see more technical conversations elsewhere.

  • @abhayurmaliya6801
    @abhayurmaliya6801 1 month ago

    Hello Sir!!
    I have fine-tuned a Llama model and now want to deploy it with Flask. How do I do that? When I try to run the Flask app, the problem is that it starts downloading the base model, which it requires to load the fine-tuned model. Is there a way to store the model, like we do in classic ML, and then just use it, without taking much time?
    Please tell me.
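
    One common pattern: merge the adapter into the base model once, save the merged weights to a local directory, and have the Flask app load only that directory, so nothing is downloaded at startup. A rough sketch of the serving side, assuming the merged model was already saved to a hypothetical "llama2-merged-local" folder (for example with merge_and_unload() plus save_pretrained(), as in the earlier snippets):

    # app.py: minimal Flask sketch that loads the locally saved model once at startup.
    import torch
    from flask import Flask, jsonify, request
    from transformers import AutoModelForCausalLM, AutoTokenizer

    MODEL_DIR = "llama2-merged-local"  # hypothetical directory with merged weights + tokenizer

    tokenizer = AutoTokenizer.from_pretrained(MODEL_DIR, local_files_only=True)
    model = AutoModelForCausalLM.from_pretrained(
        MODEL_DIR,
        torch_dtype=torch.float16,
        device_map="auto",
        local_files_only=True,  # never reach out to the Hub
    )

    app = Flask(__name__)

    @app.route("/generate", methods=["POST"])
    def generate():
        prompt = request.json["prompt"]
        inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
        output = model.generate(**inputs, max_new_tokens=200)
        return jsonify({"text": tokenizer.decode(output[0], skip_special_tokens=True)})

    if __name__ == "__main__":
        app.run(host="0.0.0.0", port=5000)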

  • @sasukeuchiha-ck4hy
    @sasukeuchiha-ck4hy 1 year ago +1

    Can you train the model on German data?

  • @karimbaig8573
    @karimbaig8573 1 year ago

    When you say you are tracking loss, what loss is that and how is that loss calculated for the task (summarization) at hand?

    • @anuranjankumar2904
      @anuranjankumar2904 11 months ago

      I have the same question. @karimbaig8573 were you able to figure out the answer?

    • @karimbaig8573
      @karimbaig8573 11 months ago

      Nope.
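
      As far as I know, the loss SFT-style trainers report is just the standard causal language-modelling loss: token-level cross-entropy (next-token negative log-likelihood) over the formatted text, with no summarization-specific metric such as ROUGE involved. A toy illustration of the computation:

      import torch
      import torch.nn.functional as F

      # Toy shapes: (batch, seq_len, vocab) logits and the token ids of the same text.
      logits = torch.randn(1, 5, 32000)
      labels = torch.randint(0, 32000, (1, 5))

      # Shift so each position predicts the *next* token, then average cross-entropy.
      shift_logits = logits[:, :-1, :]
      shift_labels = labels[:, 1:]
      loss = F.cross_entropy(
          shift_logits.reshape(-1, shift_logits.size(-1)),
          shift_labels.reshape(-1),
      )
      print(loss)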

  • @MecchaKakkoi
    @MecchaKakkoi 9 months ago

    This looks like a great notebook; however, I always get a "CUDA out of memory" error when it executes the SFTTrainer function. It's fine up until then according to nvidia-smi, but then memory just instantly maxes out. Does anyone know a way around this?

    • @rishabjain9275
      @rishabjain9275 9 months ago +1

      try reducing the sequence length

    • @fl028
      @fl028 9 months ago +1

      I reduced per_device_train_batch_size to 1.

    • @lotfiamr8433
      @lotfiamr8433 2 months ago

      Hello, did that work for you? I have the same error!!
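
      The two suggestions above are the usual levers, together with gradient accumulation and gradient checkpointing. A rough sketch of the relevant arguments, assuming the model and formatted dataset from earlier in the notebook and a trl version whose SFTTrainer still accepts max_seq_length and dataset_text_field directly:

      from transformers import TrainingArguments
      from trl import SFTTrainer

      training_args = TrainingArguments(
          output_dir="experiments",
          per_device_train_batch_size=1,   # smallest possible micro-batch
          gradient_accumulation_steps=8,   # keep the effective batch size up
          gradient_checkpointing=True,     # trade compute for activation memory
          fp16=True,
          num_train_epochs=1,
      )

      trainer = SFTTrainer(
          model=model,                # quantized base model with the LoRA config applied
          train_dataset=dataset,      # the formatted dataset with a "text" column
          args=training_args,
          max_seq_length=512,         # shorter sequences use far less activation memory
          dataset_text_field="text",
      )
      trainer.train()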

  • @okopyl
    @okopyl 1 year ago

    Why do you use that kind of prompt for the training, like `### Instruction`, when in fact Llama 2 prompts are like `[INST]`...?

    • @g1rlss1mp
      @g1rlss1mp 1 year ago

      I think `[INST]` is the LLaMA-2-chat prompt format. The base model was not fine-tuned with it.
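
      For reference, the two formats being contrasted, written out as plain Python templates: the `[INST]`/`<<SYS>>` layout is what the llama-2-chat checkpoints were tuned on, while the `### Instruction` layout is an Alpaca-style template that is fine for fine-tuning the base model on your own data (field names below are illustrative):

      # Alpaca-style template (base-model fine-tuning, as in the video).
      alpaca_style = (
          "### Instruction: Summarize the conversation below.\n\n"
          "### Input:\n{conversation}\n\n"
          "### Summary:\n{summary}"
      )

      # Llama-2-chat template (what the -chat checkpoints expect).
      llama2_chat_style = (
          "<s>[INST] <<SYS>>\nYou are a helpful assistant that summarizes conversations.\n<</SYS>>\n\n"
          "{conversation} [/INST] {summary} </s>"
      )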

  • @chukypedro818
    @chukypedro818 1 year ago +1

    Super🎉

  • @danieladama8105
    @danieladama8105 1 year ago +1

    🔥

  • @SahlEbrahim
    @SahlEbrahim 4 months ago

    Anyone else having an issue with loading the dataset?

  • @skahler
    @skahler 1 year ago

    omg @ 15:06 😂😂😂

  • @JeeneyAI
    @JeeneyAI 9 months ago +3

    All of these tutorials require more dependencies. Can't somebody post how to do this in PyCharm with your own GPU? I can't make any of the tutorials I've found work, and it's just an endless troubleshooting process as to why everything is different in all of them.