Building with Instruction-Tuned LLMs: A Step-by-Step Guide

  • Published 17 Sep 2024

COMMENTS • 58

  • @RaymonddeLacaze
    @RaymonddeLacaze 1 year ago +17

    That was an excellent presentation; I feel like I learned a lot, and I am frequently disappointed by these 1-hr webinars. I really appreciated the way you both complemented each other: it was great to get the top-level view, and Chris did a great job of walking through the code. He understandably moved a bit fast, so it was hard to ingest all of the code, which is normal, and then I really appreciated Greg giving a recap and the takeaways of what Chris had demoed. It really helped me retain something constructive from the code demo. All in all, I think you both did a great job. Thank you for doing this. I would love to get a copy of the slides and the code that was demoed, to walk through it at my own pace and try it out.
    Will you guys be making the slides and code available?

    • @Deeplearningai
      @Deeplearningai 1 year ago +1

      We'll be following up with the slides!

    • @Jyovita1047316
      @Jyovita1047316 1 year ago

      @@Deeplearningai when?

    • @lysanderAI
      @lysanderAI 1 year ago +1

      You can find a link to the slides in the chat around the 45-minute mark in the video.

  • @steveking5858
    @steveking5858 1 year ago +1

    Great session. Really helps in starting to understand the key building blocks and considerations required to do model fine-tuning. Great job Chris and Greg - and thanks!

  • @redfield126
    @redfield126 1 year ago +1

    This is very educational content; I found almost all of my main questions answered. Fantastic wrap-up. Thank you, guys!

  • @prizmaweb
    @prizmaweb 1 year ago +2

    Outstanding! I was looking around for exactly this for the last week.

  • @chukypedro818
    @chukypedro818 1 year ago +1

    Awesome webinar.
    Thanks, Chris and Greg!

  • @archit_singh15
    @archit_singh15 9 months ago

    Such excellent explanations; everything made perfect sense! Thanks.

  • @wangsherpa2801
    @wangsherpa2801 1 year ago +1

    Excellent session, thanks!

  • @fabianaltendorfer11
    @fabianaltendorfer11 1 year ago

    Love the energy. Thanks for the session!

  • @fal57
    @fal57 1 year ago

    Thank you so much; you've made the idea very simple.

  • @membershipyuji
    @membershipyuji 1 year ago +2

    The session was great and informative. For the second part, I would like to see inference results from before fine-tuning as well. bloomz is instruction-tuned already and might be good at writing marketing emails even before being fed the 16 examples.

    • @temp_temp3183
      @temp_temp3183 1 year ago +3

      100% agree; it wasn't clear what the value add of the unsupervised training was.

    • @chrisalexiuk
      @chrisalexiuk 1 year ago

      Great question!
      If you load up the model without the fine-tuning, you will see that it does "fine" on the task - but that it doesn't achieve the same "style" as we're training with the unsupervised fine-tuning. You can imagine it as more of an extension of pre-training - which uses a largely unsupervised process.
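
      A toy sketch of what that unsupervised (causal-LM-style) data prep typically looks like, assuming the common chunk-the-corpus approach; this helper and the numbers below are illustrative, not from the webinar notebook:

```python
# Sketch (hypothetical): "unsupervised" fine-tuning just continues
# next-token prediction on raw domain text, so data prep is little more
# than flattening the corpus into token ids and chunking it into blocks.

def chunk_corpus(token_ids, block_size):
    """Split a flat list of token ids into fixed-size training blocks,
    dropping the ragged tail (as common causal-LM pipelines do)."""
    n_blocks = len(token_ids) // block_size
    return [token_ids[i * block_size:(i + 1) * block_size]
            for i in range(n_blocks)]

# Toy "tokenized" marketing-email corpus: 10 token ids, block size 4.
blocks = chunk_corpus(list(range(10)), block_size=4)
print(blocks)  # [[0, 1, 2, 3], [4, 5, 6, 7]]
```

      Each block is then trained on with the ordinary next-token objective, which is why the result reads as an extension of pre-training that absorbs the corpus's style.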

  • @llohannsperanca
    @llohannsperanca 1 year ago +3

    Great presentation! Thank you very much!
    I wonder where the material will be made available?

  • @anujanand6
    @anujanand6 1 year ago +1

    That was a great presentation! Brief, yet clear and to the point!
    I have a question about the make_inference function. Based on the code, both outputs (the good and the bad) seem to come from the same fine-tuned model: in the inference function, the good outputs are from 'base_model' and the bad outputs are from 'model', but base_model is the model that was fine-tuned and pushed to the hub, and later we import that model and store it in the object 'model'. The only difference seems to be that max_new_tokens is smaller when predicting the bad outputs. Please correct me if I'm wrong. Thanks!

  • @MS-en7el
    @MS-en7el 1 year ago +3

    Hi! Thank you for the valuable content! I still have a question, though.
    @chrisalexiuk Do I correctly assume that in both cases (instruction tuning and "unsupervised" fine-tuning) the model during the training (tuning) phase actually performs the next-token prediction task and calculates loss based on that (as in typical autoregressive training of a decoder)? My point is that in both cases we simply create the text input in different formats (e.g., input combined with the response [or target] in the first case) and pass it through the base model. Is there any crucial "technical" difference underneath in the presented cases?
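
    For readers wondering the same thing: in the standard causal-LM setup both cases do reduce to next-token prediction on a single assembled string; only the text template differs. A minimal illustration (the template markers and helper names below are hypothetical, not taken from the webinar notebook):

```python
# In both instruction tuning and "unsupervised" fine-tuning, the training
# example ends up as one token sequence, and the causal-LM loss is
# next-token prediction: labels are the input ids shifted left by one.

def build_instruction_text(instruction, response):
    # One common prompt template; exact markers vary by dataset/notebook.
    return f"### Instruction:\n{instruction}\n### Response:\n{response}"

def shift_labels(input_ids):
    """Labels for causal-LM training: predict token t+1 at position t.
    -100 is the conventional 'ignore' index in PyTorch losses."""
    return input_ids[1:] + [-100]

text = build_instruction_text("Write a tagline.", "Ship faster.")
toy_ids = list(range(5))       # stand-in for tokenizer(text)["input_ids"]
print(shift_labels(toy_ids))   # [1, 2, 3, 4, -100]
```

    The "unsupervised" case would simply pass raw corpus text instead of the templated string; the shifting and the loss are identical.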

  • @seyedmohammadseyedmahdi8913
    @seyedmohammadseyedmahdi8913 1 year ago +1

    Thanks!

  • @seulkeelee4655
    @seulkeelee4655 1 year ago +2

    Thanks for the great session! Just one question: I tried the supervised instruct-tuning exactly the same way. After the training completed, I tried to push the model to the hub, but I got an error message: "NotImplementedError: You are calling `save_pretrained` on a 4-bit converted model. This is currently not supported." But you seemed to have no issue with pushing. Do you have any insight? Any advice? Thank you!

  • @amortalbeing
    @amortalbeing 7 months ago

    Thanks a lot, really appreciate it. To what extent does quantizing affect the training, or the output of the model in terms of its generation capabilities? Does it dumb the model down? Does it affect the loss?

  • @MauricioGomez-e9e
    @MauricioGomez-e9e 1 year ago +1

    Magnificent!

  • @fox_trot
    @fox_trot 1 year ago +7

    Will you guys be making the slides and code available?

  • @MrLahcenDZ
    @MrLahcenDZ 1 year ago +2

    I think there's an error in formatting_func: it should be example.get("context", ""), not example.get("input", ""), since I assume the key is "context". So in your case the function will always go to the else branch, and all the data will be formatted with only an instruction and a response, never with a context. Or maybe I'm missing something.

    • @chrisalexiuk
      @chrisalexiuk 1 year ago +3

      Excellent catch! This is a relic of trying it out on a few different datasets! It should be updated as of now.
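
      For reference, a sketch of the corrected function the thread refers to; the field names assume a Dolly-style record with "instruction", "context", and "response" keys, and the exact template may differ from the notebook:

```python
# Hypothetical reconstruction of the fixed formatting_func: format records
# with or without a context passage, keyed on "context" (not "input").

def formatting_func(example):
    if example.get("context", ""):   # the key is "context", not "input"
        return (f"### Instruction:\n{example['instruction']}\n"
                f"### Context:\n{example['context']}\n"
                f"### Response:\n{example['response']}")
    else:
        return (f"### Instruction:\n{example['instruction']}\n"
                f"### Response:\n{example['response']}")

with_ctx = formatting_func({"instruction": "Summarize.",
                            "context": "LLMs predict tokens.",
                            "response": "Token predictors."})
print("### Context:" in with_ctx)  # True
```

      With the original example.get("input", "") the condition was always falsy for this dataset, so every record silently took the else branch.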

  • @pec8377
    @pec8377 11 months ago

    Your first model repeats itself and does tons of weird things. What would you do to correct this? More steps? A larger dataset?

  • @bhaveshsethi6876
    @bhaveshsethi6876 1 year ago +2

    @chrisalexiuk How did you push the 4-bit model to Hugging Face?

    • @weizhili6732
      @weizhili6732 1 year ago

      I got the same error today: NotImplementedError: You are calling `save_pretrained` on a 4-bit converted model. This is currently not supported

    • @bhaveshsethi6876
      @bhaveshsethi6876 1 year ago

      @@weizhili6732 I searched this; the 4-bit model can't be saved, and loading a 4-bit model requires more GPU, so you have to go through the same process again and again.

    • @chrisalexiuk
      @chrisalexiuk 1 year ago +1

      Hey!
      You'll just want to save and push the adapter. I'll modify the notebook to a format that includes that process; you can expect that to be pushed tomorrow!

  • @androide551
    @androide551 1 year ago +2

    When slides, sir?

  • @akibulhaque8621
    @akibulhaque8621 8 months ago

    For the supervised instruction tuning, can I use any model? Like a Llama 2 base model, and train it?

  • @karrtikiyer1987
    @karrtikiyer1987 1 year ago

    Thanks for the nice tutorial. How do you create a custom dataset for the second part (single-task unsupervised learning)? Say I have a bunch of documents; is there some framework or library available to create a single-task dataset for unsupervised learning?

  • @ashsha-y5f
    @ashsha-y5f 1 year ago

    @chrisalexiuk I wanted to fine-tune a Llama model on my Mac M1, but it seems bitsandbytes does not have a package for Apple silicon yet. Any suggestions?

  • @amortalbeing
    @amortalbeing 7 months ago

    Where are the slides? I want to read the paper suggested at 30:10.
    What am I supposed to do?
    Thanks a lot in advance.

  • @ChiliJ
    @ChiliJ 1 year ago +1

    If I'm looking to teach the LLM a new programming language, should I go with instruction tuning or fine tuning?

    • @chrisalexiuk
      @chrisalexiuk 1 year ago +1

      Fine-tuning will likely have the best results!

    • @ChiliJ
      @ChiliJ 1 year ago

      @@chrisalexiuk Thank you for being responsive. I've got to check out your channel as well. Very informative!

  • @ashwinrachha1694
    @ashwinrachha1694 1 year ago +2

    I tried instruction tuning on a custom dataset and got this error:
    ValueError: num_samples should be a positive integer value, but got num_samples=0

    • @chrisalexiuk
      @chrisalexiuk 1 year ago

      This has been corrected in the notebook now; there were some changes to the libraries that caused a few errors.

    • @ShyamSunderKumarNITD
      @ShyamSunderKumarNITD 1 year ago +4

      @@chrisalexiuk Where can I access the notebook?

  • @ashishsharma-fy7ox
    @ashishsharma-fy7ox 1 year ago

    I get this error when I try to push to the hub: NotImplementedError: You are calling `save_pretrained` on a 4-bit converted model. This is currently not supported. Has anyone seen this error?

  • @Pouya..
    @Pouya.. 5 months ago

    Are these notebooks available?

  • @paparaoveeragandham284
    @paparaoveeragandham284 4 months ago

    Look it

  • @prayagpurohit148
    @prayagpurohit148 11 months ago

    Hey guys, I come from a non-data-science background and am trying to automate a task: giving feedback to students. However, I am having a hard time coming up with the logic for fine-tuning. If anyone is interested in helping me out, please reply to this comment and I'll give you more context about the problem.

  • @fintech1378
    @fintech1378 10 months ago

    I fine-tuned Llama 2 on Colab, but it says CUDA ran out of memory. What is the problem here? The video says it's possible.

  • @ashishsharma-fy7ox
    @ashishsharma-fy7ox 1 year ago +1

    I am using openlm-research/open_llama_7b_v2. The training starts with a loss around 1.26, and after 5K steps the loss goes down to 1.02. I am not sure why the numbers are so different from the presentation and the model is learning so slowly. Any suggestions?

  • @EXPERIMENTGPT
    @EXPERIMENTGPT 1 year ago +1

    @chrisalexiuk I am getting this warning: WARNING:accelerate.utils.modeling:The model weights are not tied. Please use the `tie_weights` method before using the `infer_auto_device` function.

    • @chrisalexiuk
      @chrisalexiuk 1 year ago

      On which notebook is this occurring?

    • @EXPERIMENTGPT
      @EXPERIMENTGPT 1 year ago +1

      @@chrisalexiuk google colab

    • @chrisalexiuk
      @chrisalexiuk 1 year ago

      @@EXPERIMENTGPT Is it in the Supervised Fine-tuning notebook?

    • @EXPERIMENTGPT
      @EXPERIMENTGPT 1 year ago +1

      @@chrisalexiuk yes sir

    • @chrisalexiuk
      @chrisalexiuk 1 year ago

      @@EXPERIMENTGPT Hey! Sorry for the late reply; I don't wind up getting notifications on these comments. I didn't encounter this issue; could you send me a copy of your notebook?