I like how you named this series AI Coffee Break. We get such a nice explanation literally in the time it takes to have a coffee
So happy you appreciate the name! I spent hours trying to find a good one, without success. Then I made myself a coffee, and the name AI Coffee Break just popped into my mind. I never had second thoughts afterwards. 😅
I've never closed the Attention Is All You Need paper as fast as when I received the notification for this video
Haha, did not expect that reaction, lol! 😂
Thank you from Saudi Arabia! The way you explain things is impressive. Thank you, Ms. Coffee Bean
🤗
Thanks!
Omg. Thank you so much! 😊
What perfect timing. I'm currently preparing a presentation of the new T2T-ViT model for a job interview! Great job, as always.
Good luck with your interview! 🤞
Great job as always! The description is also pretty well written!
Yay, thank you!
Your videos are extraordinarily insightful. Love it!!!
Oh wow, thanks for showing your appreciation on Ko-Fi! Just saw it. 🤯
@AICoffeeBreak Absolutely! 😄
"The mathematical details of this adaptation are not extensively detailed in the ViT paper, beyond specifying that they use learnable 1D positional embeddings." Could you make a video on the mechanics of how this is done? I love you and your videos btw.
This discussion is very interesting!