To Fine Tune or Not Fine Tune? That is the question

  • Published 9 Nov 2024

COMMENTS • 5

  • @dipteshbose
@dipteshbose A year ago +3

Awesome, simple, and easy to understand.

  • @ai4sme
@ai4sme 8 months ago +1

    Awesome explanation! Thanks!

  • @akilja2011
@akilja2011 11 months ago

Great tutorial! I’m interested in learning more about how to iterate between testing and training until you reach sufficient inference quality.

  • @JackGuo-l2x
@JackGuo-l2x 11 months ago

Thanks! Easy to understand.

  • @TheHorse_yes
@TheHorse_yes A year ago

🐴 Fascinating! I have a localized web platform built on the OpenAI API that uses function calls to query and fetch extra data, and I have been wondering whether I should try fine-tuning a GPT-3.5-16k instance for specific use cases, such as customer service bots that stay up to date and need less extra data fetching. This is especially important in non-English primary use cases, where I find GPT-3.5's wording a bit lacking at times. Will definitely have to take a look at it. Thanks for the video. Regards, Horse