🦙 LLAMA-2 : EASIEST WAY TO FINE-TUNE ON YOUR DATA Using Reinforcement Learning with Human Feedback 🙌
- Published 3 Jul 2024
- In this video, I'll show you the easiest, simplest, and fastest way to fine-tune Llama-v2 on your local machine with a custom dataset! You can also use this tutorial to train/fine-tune any other Large Language Model (LLM). In this tutorial, we will be using reinforcement learning with human feedback (RLHF) to train our Llama, which will improve its performance.
This technique is how such models are trained, and in this video we will see how to fine-tune this LLM.
Please subscribe and like the video to help keep me motivated to make awesome videos like this one. :)
Free Google Colab for 4bit QLoRA fine-tuning of llama-2-7b model
Rise and Rejoice - Fine-tuning Llama 2 made easier with this Google Colab Tutorial
✍️Learn and write the code along with me.
🙏The hand promises: if you subscribe to the channel and like this video, it will release more tutorial videos.
👐I look forward to seeing you in future videos
Links:
Dataset to train: huggingface.co/datasets/Carpe...
Reward_dataset:
huggingface.co/datasets/Carpe...
Second Part:
• 🐐Llama 3 Fine-Tune wit...
github.com/ashishjamarkattel/...
#llama #finetune #llama2 #artificialintelligence #tutorial #stepbystep #llm #largelanguagemodels - Science & Technology
Loved it. Definitely gonna try. Waiting for the next video.
Glad you liked the video
Nice video, thanks!
Hello brother, your video is very informative and it covers every part: the theory, the code, and the explanations. But could you possibly make one good demo, where you provide a paragraph and show how the data works and how the output is generated, like a before-and-after demo of the model?
can you please link the research paper showed at the beginning of the video? Thanks!
PS: great video! keep up the amazing work
arxiv.org/abs/1909.08593 here it is. Sorry for the late reply was quite busy
This is a good one, but our custom collected data doesn't have positive/negative columns. It would be nice if you could make a video about:
how to create a custom fine-tuning dataset for Llama 2 (without negatives and positives), but also how to use it. None of the videos on the internet focus on step 2. They just build the dataset with AutoTrain and do nothing after.
This video is specifically about RLHF, which requires the positive/negative data. If you want to fine-tune without positives/negatives, you may search for something like 'LoRA'.
@@minjunpark6613 thanks, after some more thinking, I may convert my dataset to fit RLHF :)
Looking forward to watching new videos on this topic. When will you upload the next video?
Thank you for the comment. If everything goes well, the next RLHF video will be up tomorrow; otherwise on Saturday.
@@WhisperingAI How does it go? haha
Haha, I will upload it today. Sorry for the delay.
@@WhisperingAI Nice. Hope everything goes well. Thanks!
Dear sir, I have a question: during generation, the LLM's output is not the summary but the same text that was passed in the prompt.
This is a common problem when fine-tuning.
Please check the dataloader and try increasing the context length. That should solve the issue.
@@WhisperingAI sir, are you talking about this max length?
data_path = "test_policy.parquet"
train_dataset = TLDRDataset(
data_path,
tokenizer,
"train",
max_length=256,
)
@@Paperstressed yes, please increase it to something higher, like 512 or 1024.
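To see why max_length matters here: if the tokenized prompt plus summary exceeds max_length, truncation can drop the summary entirely, so the model only ever sees the prompt and learns to echo it. A toy sketch with a hypothetical whitespace "tokenizer" (not the real tokenizer from the video):

```python
# Toy illustration: truncation to max_length keeps only the leading tokens,
# so a long prompt can crowd the summary out of the training example.
def encode(text, max_length):
    tokens = text.split()          # stand-in for a real tokenizer
    return tokens[:max_length]     # truncation, as done by max_length

prompt = " ".join(f"p{i}" for i in range(300))   # a 300-token prompt
summary = " ".join(f"s{i}" for i in range(50))   # a 50-token summary

short = encode(prompt + " TL;DR: " + summary, max_length=256)
long_ = encode(prompt + " TL;DR: " + summary, max_length=512)

print(any(t.startswith("s") for t in short))  # False: summary truncated away
print(any(t.startswith("s") for t in long_))  # True: summary survives
```

With max_length=256 the 300-token prompt alone already fills the window, which matches the symptom described above.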
Would you be able to share the notebook link as well?
Thank you for the comment. Since I wrote the code locally, I will only be able to share it after the next video (in 2-3 days). Sorry for the inconvenience.
Dear sir, can you teach us how to fine-tune an LLM for question answering?
Sure, I have this video you can check.
It's a step-by-step process without talking.
ua-cam.com/video/FMd15f_rzGc/v-deo.html
Can I use a Falcon model with this code? Anything to keep in mind while using Falcon?
Yes, you can use any model for this. There is nothing special to keep in mind if you follow the tutorial.
For the same use case as in the above tutorial?
@@sauravmohanty3946 yes
If I want to add human feedback, can I do that?
And if yes, then how?
Human feedback is the dataset that is created in steps 1 and 2.
So you can create your own dataset that matches that format to train all three steps.
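The "format" mentioned above is the pairwise preference layout commonly used for the reward-model step: each record holds one prompt plus a human-preferred and a rejected response. A minimal sketch (the field names are illustrative assumptions, not necessarily the exact columns of the video's dataset):

```python
# Hypothetical pairwise preference record for reward-model training (step 2).
reward_data = [
    {
        "prompt": "Summarize: The meeting covered Q3 revenue and hiring plans.",
        "chosen": "Q3 revenue and hiring plans were discussed.",   # preferred
        "rejected": "The meeting happened.",                       # dispreferred
    },
]

# A quick sanity check you might run before training: every record needs
# all three fields, and chosen/rejected must differ.
for row in reward_data:
    assert {"prompt", "chosen", "rejected"} <= row.keys()
    assert row["chosen"] != row["rejected"]
print("ok")
```

If your raw data has no positive/negative pairs, this is the shape you would convert it into.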
@@WhisperingAI like in model traing time can i put some kind of feedback like some of Label selection during model training??
The Colab code did not work... it shows a "cuda" error in step 3... can you please help?
Can you please provide an explanation of the issue?
0%| | 0/3 [00:00
How do I save and push the model to Hugging Face?
Thanks for the comment. In that case, you can just add push_to_hub=True to the TrainingArguments, or call trainer.push_to_hub() after training the model. Hope this helps.
@@WhisperingAI but that just pushes the adapters, though? You can't do inference with that.
I'm not sure which video you are talking about, but I guess you are using LoRA and PEFT and defining your model via PEFT. In that case you need to:
1. Save the model via model.base_model.save_pretrained("/path_to_model")
2. Load the model
3. Call model.push_to_hub()
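One common way to do those steps (a hedged sketch, not necessarily the exact code from the video): merge the LoRA adapters back into the base weights with peft's merge_and_unload(), so what you push is a full standalone checkpoint that works for inference. The model names and paths below are placeholders.

```python
# Sketch: fold LoRA adapters into the base model, then push full weights.
from transformers import AutoModelForCausalLM
from peft import PeftModel

base = AutoModelForCausalLM.from_pretrained("meta-llama/Llama-2-7b-hf")
model = PeftModel.from_pretrained(base, "path/to/adapter_checkpoint")

merged = model.merge_and_unload()       # folds the LoRA deltas into the base weights
merged.save_pretrained("path/to/merged_model")
merged.push_to_hub("your-username/llama2-merged")  # full weights, not just adapters
```

This addresses the "only the adapters get pushed" concern raised above: after merging, the uploaded model can be loaded and run without peft.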
Great video. I would like to create my own fine-tuned model for my company. Could I contact you?
Great content. Can you please share the code?
Thank you for your comment, but I will only be able to share the code after the next video, which I have planned in 2-3 days. It will be a full tutorial with source code.
Sorry for the trouble.
@@WhisperingAI for sure waiting
@@brainybotnlp
colab.research.google.com/drive/1gAixKzPXCqjadh6KLsR5ZRUnb8VRvZl1?usp=sharing
Is 12 GB enough VRAM to fine-tune model with 4bit QLoRA ?
It might not be enough. Llama-2-7b is about 13 GB in fp16, and loading it in 4-bit shrinks the weights roughly 4x, so 13/4 ≈ 3.25 GB for the weights alone.
But RLHF keeps several model copies in memory, and with optimizer states and activations the total can climb to around 15 GB depending on the optimizer you are using.
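As a rough back-of-the-envelope (the counts and overheads below are assumptions for illustration, not measurements):

```python
# Rough VRAM sketch: the PPO step of RLHF holds several copies of the model
# (e.g. policy + reference + reward), so even with 4-bit weights the total
# can exceed a 12 GB card. All numbers here are illustrative assumptions.
fp16_weights_gb = 13.0             # llama-2-7b weights in fp16 are ~13 GB
four_bit_gb = fp16_weights_gb / 4  # 4-bit quantization shrinks weights ~4x
n_models = 3                       # assumed: policy, reference, reward
overhead_gb = 4.0                  # assumed: optimizer states + activations

total = n_models * four_bit_gb + overhead_gb
print(round(four_bit_gb, 2))  # 3.25
print(round(total, 2))        # 13.75
```

Under these assumptions a 12 GB card comes up short, which matches the reply above; offloading or a smaller batch/context can change the picture.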
Is it possible to offload some part of the memory requirement to CPU/RAM during fine-tuning? @@WhisperingAI
There is an error message when I try to install trl, and I don't know why I am stuck... Can I have your email to discuss this issue with you?
Can you raise an issue on GitHub?