Training a Model in Hugging Face (11.5)
- Published Oct 4, 2024
- This video shows how to use PyTorch to fine-tune an existing Hugging Face model.
Code for This Video:
github.com/jef...
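For a quick sense of what the notebook covers, here is a minimal sketch of fine-tuning with a plain PyTorch training loop. The model (distilbert-base-uncased), the IMDB dataset, and the hyperparameters are placeholder assumptions for illustration; the actual notebook in the repo may differ.

```python
# Minimal sketch: fine-tune a Hugging Face model with a plain PyTorch loop.
# Model/dataset names are placeholders, not necessarily what the video uses.
import torch
from torch.utils.data import DataLoader
from transformers import AutoTokenizer, AutoModelForSequenceClassification
from datasets import load_dataset

device = "cuda" if torch.cuda.is_available() else "cpu"

tokenizer = AutoTokenizer.from_pretrained("distilbert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained(
    "distilbert-base-uncased", num_labels=2).to(device)

# Tokenize a small slice of IMDB so the example runs quickly in Colab.
ds = load_dataset("imdb", split="train[:2000]")
ds = ds.map(lambda b: tokenizer(b["text"], truncation=True,
                                padding="max_length", max_length=128),
            batched=True)
ds.set_format("torch", columns=["input_ids", "attention_mask", "label"])
loader = DataLoader(ds, batch_size=16, shuffle=True)

optimizer = torch.optim.AdamW(model.parameters(), lr=5e-5)
model.train()
for epoch in range(1):
    for batch in loader:
        batch = {k: v.to(device) for k, v in batch.items()}
        # The model computes the loss itself when labels are provided.
        outputs = model(input_ids=batch["input_ids"],
                        attention_mask=batch["attention_mask"],
                        labels=batch["label"])
        outputs.loss.backward()
        optimizer.step()
        optimizer.zero_grad()
```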
~~~~~~~~~~~~~~~ COURSE MATERIAL ~~~~~~~~~~~~~~~
📖 Textbook - Coming soon
😸🐙 GitHub - github.com/jef...
▶️ Playlist - • 2024 PyTorch Version A...
🏫 WUSTL Course Site - sites.wustl.ed...
~~~~~~~~~~~~~~~ CONNECT ~~~~~~~~~~~~~~~
🖥️ Website: www.heatonrese...
🐦 Twitter - / jeffheaton
😸🐙 GitHub - github.com/jef...
📸 Instagram - / jeffheatondotcom
🦾 Discord: / discord
▶️ Subscribe: www.youtube.co...
~~~~~~~~~~~~~~ SUPPORT ME 🙏~~~~~~~~~~~~~~
🅿 Patreon - / jeffheaton
🙏 Other Ways to Support (some free) - www.heatonrese...
~~~~~~~~~~~~~~~~~~~~~~~~~~~~
#PyTorch #finetune #huggingface
So the title is wrong... you're not training a model "in" Hugging Face; you're training with models and datasets that come "from" Hugging Face. You actually train the model in Colab.
Thank you so much for this quick demo!
❤I like your tutorial
yea! huggy face! yup! they've got it all!!
Great video. Just out of curiosity (I know I could look through your channel), do you have a video on quantizing an LLM? Say, from 32-bit FP down to 8-bit or 6-bit. Pros and cons, besides the obvious smaller and less accurate?
Have not done a video on that yet; great idea, though.
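In the meantime, a minimal sketch of one common approach: loading a model with 8-bit weights via bitsandbytes through transformers. The model name is a placeholder, and this is just one technique, not something from the video.

```python
# Minimal sketch: load a causal LM with 8-bit weights via bitsandbytes.
# Requires `pip install bitsandbytes accelerate` and a CUDA GPU.
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

model_id = "gpt2"  # placeholder; any causal LM on the Hub

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    quantization_config=BitsAndBytesConfig(load_in_8bit=True),  # weights stored in int8
    device_map="auto",  # let accelerate place layers on available devices
)
```

The main win is memory (int8 weights take roughly a quarter of FP32); the accuracy and speed impact depends on the model and the quantization method.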
What does Colab Pro+ offer you beyond plain Pro?