Great tutorial! I’m interested in learning more about how to iterate between the testing and training until you get to a sufficient quality of inference.
🐴 Fascinating! I have a localized web platform that uses OpenAI API function calls to query and fetch extra data, and I have been wondering whether I should try fine-tuning a GPT-3.5-16k instance for specific use cases, such as customer-service bots that stay up to date and need less extra data fetching. This is especially important in primarily non-English use cases, where I find GPT-3.5's wording a bit lacking at times. Will definitely have to take a look at it. Thanks for the video. Regards, Horse
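For anyone curious what the fine-tuning data for a customer-service bot like this would look like: OpenAI's chat fine-tuning expects a JSONL file with one conversation per line, each holding a `messages` list of system/user/assistant turns. A minimal sketch (the example questions and answers below are made up for illustration):

```python
import json

# Hypothetical training records for a customer-service fine-tune;
# the questions and replies are invented placeholders.
examples = [
    {
        "messages": [
            {"role": "system", "content": "You are a concise customer-service assistant."},
            {"role": "user", "content": "How do I reset my password?"},
            {"role": "assistant", "content": "Open Settings > Account and choose 'Reset password'."},
        ]
    },
]

# OpenAI fine-tuning expects JSONL: one JSON object per line.
with open("train.jsonl", "w", encoding="utf-8") as f:
    for ex in examples:
        f.write(json.dumps(ex, ensure_ascii=False) + "\n")
```

The file would then be uploaded with `purpose="fine-tune"` and passed as the `training_file` of a fine-tuning job; for non-English use cases, the training examples themselves can simply be written in the target language.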
Awesome, simple and easy to understand.
Awesome explanation! Thanks!
Thanks! Easy to understand