Hi Jay,
I love the work you have done! Ever since I read the Illustrated Transformer, I have been blown away by your explanations and illustrations. You really explain advanced concepts with such clarity and simplicity. I am very grateful to you for that! I really look forward to reading and learning from your book! Thank you so much!!
Can you please share the link of that?
yeah. Same here, shout out for that.
Great video as always Jay! :)
Thank you Louis!
This is incredible. Great work! Keep it up :)
Great content as always, Jay!
I would love if you could go into the following:
RLHF
PPO
PEFT
LoRA etc.
Adapters
Soft-prompting
Scaling transformers
Delicious topics indeed
Hi Jay,
I love your presentation, it is so inspiring, and you make hard concepts simple and clear. Regarding the tokenizer: if every word is one token, and each token is mapped to a single vector (embedding), then how do LLMs understand the meaning of the same word in different contexts? I would appreciate your answer, and I am sorry if my question is too naive.
Thank you
Neither have our neurones
Aha! But which neurons though!
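On the question above about one token mapping to a single vector: the short answer is that the embedding-table vector is only the input; attention layers then mix each token's vector with the rest of the sentence, so the vector the model actually works with is context-dependent. Here is a toy, stdlib-only sketch of that idea (made-up 2-D embeddings and a single bare attention step with no learned weights — nothing like a real LLM, just the mixing mechanism):

```python
import math

def softmax(xs):
    # Numerically stable softmax over a list of scores
    m = max(xs)
    es = [math.exp(x - m) for x in xs]
    s = sum(es)
    return [e / s for e in es]

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def self_attention(embs):
    # Each output vector is a similarity-weighted average of ALL input
    # vectors, so it depends on the whole sentence, not just the token.
    out = []
    for q in embs:
        weights = softmax([dot(q, k) for k in embs])
        out.append([sum(w * v[i] for w, v in zip(weights, embs))
                    for i in range(len(q))])
    return out

# Toy static embedding table (made up): "bank" has ONE vector here...
table = {"river": [1.0, 0.0], "bank": [0.5, 0.5], "money": [0.0, 1.0]}

sent1 = [table[w] for w in ["river", "bank"]]
sent2 = [table[w] for w in ["money", "bank"]]

ctx1 = self_attention(sent1)[1]  # "bank" next to "river" -> [0.75, 0.25]
ctx2 = self_attention(sent2)[1]  # "bank" next to "money" -> [0.25, 0.75]
# ...but its contextualized vector differs between the two sentences.
```

Same input vector for "bank" both times, two different output vectors — stacking many such layers (with learned projections) is how real models build context-specific representations.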
Hi Jay - Great video! Wondering if this is similar to computers doing everything in 0s and 1s, although from the OS level the abstraction is different. At least conceptually. Coming to the book, I am not able to find it anywhere... Is there a link?
Hi Jay, thanks again for explaining a complex topic in a simple way. If I may ask, what tool do you use to generate the graphics for your blog? Thanks in advance
Could you perhaps do an even deeper dive into how exactly these models produce the output vectors, and then how those get turned into tokens?
Not much has changed since my videos on GPT-3, honestly. Check those out.
Great Work
Hi Sir, your videos always amaze me. Need more videos for sure. Can you please share the notebook link?
Thank you! Haven't published the notebook yet, but that's a good idea
Great!!
Thank you very much
❤ very nice
Good one
And how do people know which tokenizer is the best way to split the vocab? Does this follow a math rule or a statistical pattern? Or does it depend on the computing budget?
Hi Jay, thanks for the video. Could you also please share the code?
Great! But how does the tokenizer work now? 😅
Wonderful! If you feel comfortable tackling this now, then this video has done its job. We'll address it more in the book (and possibly a subsequent video). But if you wanna get into training tokenizers now, this is a great guide: huggingface.co/learn/nlp-course/chapter6/5?fw=pt
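For anyone curious before reading that guide: the core of BPE (the algorithm that guide covers) is just "repeatedly merge the most frequent adjacent pair of symbols." A minimal, stdlib-only sketch of that loop (a toy for intuition, not how the Hugging Face library implements it):

```python
from collections import Counter

def get_pair_counts(words):
    # words: dict mapping a tuple of symbols to its corpus frequency
    pairs = Counter()
    for symbols, freq in words.items():
        for a, b in zip(symbols, symbols[1:]):
            pairs[(a, b)] += freq
    return pairs

def merge_pair(words, pair):
    # Replace every occurrence of `pair` with one fused symbol
    merged = "".join(pair)
    out = {}
    for symbols, freq in words.items():
        new_syms, i = [], 0
        while i < len(symbols):
            if i + 1 < len(symbols) and (symbols[i], symbols[i + 1]) == pair:
                new_syms.append(merged)
                i += 2
            else:
                new_syms.append(symbols[i])
                i += 1
        out[tuple(new_syms)] = freq
    return out

def train_bpe(corpus, num_merges):
    # Start from single characters; greedily learn merge rules
    words = Counter(tuple(w) for w in corpus.split())
    merges = []
    for _ in range(num_merges):
        pairs = get_pair_counts(words)
        if not pairs:
            break
        best = max(pairs, key=pairs.get)
        merges.append(best)
        words = merge_pair(words, best)
    return merges

merges = train_bpe("low low low lower lowest", 2)
# Frequent pairs get fused first: ("l", "o"), then ("lo", "w")
```

The learned merge list is the tokenizer: to tokenize new text, you apply the same merges in the same order. Which vocab size / merge count is "best" is largely empirical — it trades off sequence length against vocabulary size and compute.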