Great series, really enjoyed it.
You're basically just reading out the Stanford Alpaca code without any understanding. Great!
Thank you for leaving a "GREAT" on my videos! I know I am fantastic, and since you need to attack me verbally, never mind; if it helps you, I am here for you. Just a hint: this is part three of the series, so do not miss the explanations in the first two videos. You might get humiliated by your own comment. See you!
@@code4AI I had to read the code myself just to figure out how prompt.txt is used in conjunction with seed_tasks.jsonl (something your video only made more confusing). But you'd rather believe what the other commenters wrote... that you're doing a great job. What can I say? Great to be you!
@@code4AI Oh, ya. I watched the other two parts as well. Needless to say, I am watching no more. Reading the code is so much more effective.
NO! You... YOU had to read the code yourself!? I am shocked! If I had known before, I would have sent somebody over to do the reading for you! Now I understand your complaints so much better... Enjoy your day! And thanks for the "great job" compliment!
@@code4AI Dude, your video is literally titled "The ALPACA Code explained"
This is great. I think what's missing from the explanation at 20:40 onward is the detail about how the supervised labels (targets) are used during training. Is it correct to say: the input contains the full string, source + target concatenated, but the source portion of the prompt is ignored when calculating the loss? In other words, token prediction begins only after source_len tokens. Seems reasonable, but I'm new here.
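That is the idea. A minimal sketch of the masking in plain Python (not the repo's exact code; Alpaca's train.py does this on tensors, and `build_labels` is an illustrative name, but the repo really does use IGNORE_INDEX = -100, the target value that PyTorch's CrossEntropyLoss skips by default):

```python
# Sketch of supervised-label masking: the full source+target sequence is the
# model input, but only target tokens count toward the loss.
IGNORE_INDEX = -100  # targets with this value are ignored by the loss function

def build_labels(input_ids, source_len):
    """Copy the source+target token ids, then mask the source span so only
    the target tokens contribute to the training loss."""
    labels = list(input_ids)
    labels[:source_len] = [IGNORE_INDEX] * source_len
    return labels

# Toy sequence: 4 source (prompt) tokens followed by 3 target tokens.
ids = [101, 7, 8, 9, 42, 43, 44]
print(build_labels(ids, source_len=4))  # [-100, -100, -100, -100, 42, 43, 44]
```

The model still attends to the source tokens when predicting the target; masking only stops the loss from rewarding the model for reproducing the prompt itself.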
Can you please make a "Hands On" video making your own fine-tuning based on their process?
awesome series
If Google can't place commercial restrictions on the output of Google Search, why can OpenAI place limits on their outputs? Especially given that their model is trained on the public web.
American law. ...and a corporate lawyer charges $1,500 per hour.
@@code4AI Maybe regulators, for example the EU, should step in and ban ChatGPT if they don't open-source it 🤔😁 And most probably they used the inputs users gave ChatGPT to fine-tune their model, while also making us pay for it 😆
It's not illegal to read a book, just because it's illegal to steal one
Are the words "input" and "output" treated as special words in an LLM?
No. You can use "in" and "out" consistently throughout your prompts instead, if you prefer.
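For illustration, this is roughly what the Alpaca prompt template looks like. The "### Instruction / Input / Response" markers are ordinary text the model learns during fine-tuning, not reserved tokens; the wording below approximates the repo's template from memory, so treat it as a sketch:

```python
# Approximation of Alpaca's prompt template. Any marker scheme works, as long
# as it is used consistently between fine-tuning and inference.
TEMPLATE = (
    "Below is an instruction that describes a task, paired with an input "
    "that provides further context. Write a response that appropriately "
    "completes the request.\n\n"
    "### Instruction:\n{instruction}\n\n"
    "### Input:\n{input}\n\n"
    "### Response:"
)

def format_prompt(instruction, input_text):
    """Fill the template; the model's generation is appended after 'Response:'."""
    return TEMPLATE.format(instruction=instruction, input=input_text)

print(format_prompt("Summarize the text.", "Alpaca fine-tunes LLaMA."))
```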
It uses the OpenAI API, right? And it's not free.