Finetuning Open-Source LLMs

  • Published Dec 24, 2024

COMMENTS • 22

  • @nadranaj • 11 months ago +2

    Thanks

  • @Dom-zy1qy • 7 months ago +1

    Very much appreciate this video; fine-tuning seemed like a somewhat amorphous concept to me for some time, but the diagrams you showed really made it easier to understand how people finetune.

    • @SebastianRaschka • 7 months ago

      Thanks so much, glad these diagrams were helpful and clarified things!

  • @kenchang3456 • 6 months ago +1

    Thanks for sharing, especially about Lit-GPT (I'm always interested in more tutorials as my journey with fine-tuning and LLMs needs all the help it can get). Thanks again.

  • @Mayur7Garg • 1 year ago +2

    One of the approaches I have experimented with, which is expensive in manual labor, time, and compute but more reliable, is as follows (a minimal code sketch follows after the list):
    - Use an LLM to query for outputs. Use RAG and prompt engineering to get the best possible results.
    - Generate chat logs for each query. The log should include everything: the prompt, the retrieved info if any, and the model output. Any special symbols, such as those denoting the system prompt, should also be left in, because LLMs are text generation models with no concept of chat.
    - Manually update the model outputs to better reflect the expected output. This is a data creation task.
    - Fine-tune a copy of the same LLM on the updated chat logs using PEFT.
    This can also be done iteratively, as long as the chat logs are initially generated by a model that hasn't been fine-tuned yet, like a sort of A/B experiment: some use cases are served by the original model, which generates the data for fine-tuning, while the others are served by the fine-tuned model, whose outputs are not used for any further fine-tuning.
    It is expensive, but over time your model will work better for realistic inputs.
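
    A minimal sketch of the final PEFT step under those assumptions, using the Hugging Face peft library with LoRA; the corrected logs are assumed to live in a hypothetical corrected_chat_logs.jsonl (one {"text": ...} object per full prompt-plus-corrected-output log), and the model name and hyperparameters are illustrative only:

    from datasets import load_dataset
    from peft import LoraConfig, get_peft_model
    from transformers import (AutoModelForCausalLM, AutoTokenizer,
                              DataCollatorForLanguageModeling, Trainer,
                              TrainingArguments)

    base = "meta-llama/Llama-2-7b-hf"  # any causal LM; illustrative choice
    tokenizer = AutoTokenizer.from_pretrained(base)
    tokenizer.pad_token = tokenizer.eos_token
    model = AutoModelForCausalLM.from_pretrained(base)

    # Attach small trainable LoRA adapters; the base weights stay frozen (PEFT).
    model = get_peft_model(model, LoraConfig(
        r=8, lora_alpha=16, target_modules=["q_proj", "v_proj"],
        task_type="CAUSAL_LM"))

    # The logs are kept verbatim -- prompt, retrieved context, special tokens,
    # and the manually corrected output -- exactly as recommended above.
    ds = load_dataset("json", data_files="corrected_chat_logs.jsonl")["train"]
    ds = ds.map(lambda ex: tokenizer(ex["text"], truncation=True,
                                     max_length=1024),
                remove_columns=ds.column_names)

    trainer = Trainer(
        model=model,
        args=TrainingArguments("lora-out", num_train_epochs=1,
                               per_device_train_batch_size=1,
                               gradient_accumulation_steps=8),
        train_dataset=ds,
        data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
    )
    trainer.train()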

  • @mulderbm • 6 months ago +1

    I recently listened to your latest videos, and now this one was recommended by Perplexity for my specific use case ;-) Coincidence?

    • @SebastianRaschka • 6 months ago +1

      Haha, looks like LLMs are coming full circle here :D

  • @zjffdu • 9 months ago +1

    Thanks for the video, very helpful for understanding the different kinds of finetuning. BTW, what kind of finetuning does Hugging Face belong to?

    • @SebastianRaschka • 9 months ago

      Glad that it was helpful! HF itself has different tools for finetuning. Similarly, the LitGPT library I help develop supports full finetuning, LoRA, QLoRA, etc.
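
      For context, a hedged sketch of the QLoRA idea in that Hugging Face stack (the base model is loaded in 4-bit and frozen; only small LoRA adapters are trained); the model name is illustrative:

      import torch
      from peft import (LoraConfig, get_peft_model,
                        prepare_model_for_kbit_training)
      from transformers import AutoModelForCausalLM, BitsAndBytesConfig

      # Load the frozen base model in 4-bit NF4 quantization (the "Q" in QLoRA).
      model = AutoModelForCausalLM.from_pretrained(
          "meta-llama/Llama-2-7b-hf",  # illustrative model choice
          quantization_config=BitsAndBytesConfig(
              load_in_4bit=True,
              bnb_4bit_quant_type="nf4",
              bnb_4bit_compute_dtype=torch.bfloat16,
          ),
      )
      model = prepare_model_for_kbit_training(model)  # cast norms, enable grads

      # Add the trainable LoRA adapters on top of the quantized base.
      model = get_peft_model(model, LoraConfig(
          r=8, lora_alpha=16, target_modules=["q_proj", "v_proj"],
          task_type="CAUSAL_LM"))
      model.print_trainable_parameters()  # only the adapter weights train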

  • @captinbo1 • 1 year ago

    Thanks! Great overview

  • @mysticaltech • 1 year ago

    Awesome, thank you!

  • @superfreiheit1 • 15 hours ago

    Please show us how to create a dataset for finetuning, not just how to download one.

    • @SebastianRaschka • 11 hours ago

      I have some resources for that here: github.com/rasbt/LLMs-from-scratch/tree/main/ch07/05_dataset-generation
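
      For a taste of what that looks like, a hedged sketch: instead of downloading a dataset, you draft instruction/output pairs yourself (or have a helper LLM draft them, then review by hand) and save them in the Alpaca-style JSON format used in those materials; the entries and file name here are illustrative:

      import json

      records = [
          {
              "instruction": "Rewrite the sentence in passive voice.",
              "input": "The chef cooked the meal.",
              "output": "The meal was cooked by the chef.",
          },
          # ... more manually written or LLM-drafted, human-reviewed pairs
      ]

      with open("instruction-data.json", "w") as f:
          json.dump(records, f, indent=2)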

  • @ParthivShah • 3 months ago

    Nice Video.

  • @muhammadanas7698 • 1 year ago

    Long time! Saw you here on YT! Hope you remember me!

  • @prakhargurawa • 11 months ago

    Thank you :)

  • @lalmuansangachhakchhuak4927

    Cool

  • @PtYt24 • 5 months ago +1

    I really wish people would stop putting their X links and would start sharing something like Mastodon or Threads instead; as a free user, X is where you go to feel like a second-class citizen.

    • @SebastianRaschka • 5 months ago +1

      I hear you. On that note, I do have Threads and Mastodon accounts 😅. I'm just not using them much; somehow all the AI folks are still on X :(. I think the days of this type of social media are numbered ...

    • @PtYt24 • 5 months ago +1

      @SebastianRaschka Haha, I get it. I feel the "all the AI folks are still on X" issue is somewhat a "the buck starts with you" problem; if more people start sharing elsewhere, everyone will eventually move there, I guess.

  • @mohammadkad • 1 year ago

    Amazing, thanks!