Super hyped about this one! Kicking off a machine learning coding series! I'll be walking through the code behind many of the papers I've covered over the last few years - starting with OpenAI's CLIP!
Do let me know how you find this one - feedback is very much welcome! Is the code too tiny? Too many details? You love/hate the format? Whatever do let me know!
The best thing about this is that you really take the time to explain the shapes, which is appreciated!
Please do this for topological graphs. CLIP is easy, but the hyperbolic convolution, neural sheaf diffusion, and Ricci flows went over my head. I only barely understood some high-level concepts but would like to make proper use of them. So if you can do a coding series on those, it will be uber helpful.
someone give this man a nobel prize
humble
This is great! With all the new fancy models I kind of felt left behind, but this is surely going to help me learn how these models work under the hood. Thanks, and make more like this!
100%, CLIP is behind many of the recent interesting papers
Thanks Aleksa for these long and well-explained videos. They really helped a lot.
1:20:25 - It's the temperature parameter (usually a tuned hyperparameter, but a learned parameter in this case)
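As a rough sketch of what that learned temperature does (a hypothetical minimal example, not CLIP's actual code; in the real model `logit_scale` is a learnable `nn.Parameter` initialized to log(1/0.07)):

```python
import math

# In CLIP the logit scale is a single learnable scalar, initialized to
# log(1/0.07), so exp(logit_scale) ~= 14.29 at the start of training.
logit_scale = math.log(1 / 0.07)

def scaled_logit(cosine_sim, scale=logit_scale):
    """Turn a cosine similarity (in [-1, 1]) into a softmax logit."""
    return math.exp(scale) * cosine_sim

print(round(math.exp(logit_scale), 2))  # ~14.29 at initialization
print(round(scaled_logit(0.5), 2))      # a similarity of 0.5 -> ~7.14
```

Scaling the cosine similarities this way sharpens the softmax over image-text pairs; letting the model learn the scale means it can tune that sharpness itself.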
came across this channel today! thanks a lot
Dude, I LOVE this format! Casual code AND math explanation, how awesome?!?!?!
Great explainer Aleska! This is going to be so useful for many. Thanks for sharing. Mike
It has been a while man!
Always happy to watch your videos!
I know right! Frequency is going up now :))
Great video! Loving this series
thanks, these code walkthroughs are super helpful. keep doing more such videos.
Thank you!
I'm trying to contribute more to Disco Diffusion.
This video is fantastic. Thank you for putting it together.
Can we have a code implementation of neural sheaf diffusion, Ricci flows, and hyperbolic graph convolution? I was liking that line of work…. So some code demos would be very helpful to see how I can implement them
Thanks, nice feedback, if others want it upvote this comment!
1:20:00 - Is there a proper explanation of why there is a logit scale factor when calculating similarity? Thanks.
Watched at 1.5x, and in some of the text encoding moments this guy is a little fast (lol), but it was great. Go ahead - waiting for new content like this.
This video is so gorgeous, and it helps me a lot! Thank you so much!
amazing job! Thank you!!
Good content! I actually want to work on something similar 🤣 When it comes to feedback, I would suggest more of a high-level overview of the functions (maybe like a list or visualisation) and the overall model structure before you dive deep into the nitty-gritty details of the code. Nevertheless, great job bud!
great explanation!
Great stuff as always! A not necessarily related question (but came to my mind after seeing you using PyTorch here): Do you have the freedom of what framework to use at work, or is DM fixed on their JAX ecosystem? Looking forward to the next episode in the series 🥳
The video was really great
Thanks, Aleksa 👋
But why is the size after the text encoder always 77?
Seems like we have different text lengths for the ImageNet prompts 😲
We fix the size to 77 in order to be independent of the text length, so the token sequence is going to look something like [ # # # # 0 0 0 0 0 0 0 0 ... 0 ], where # corresponds to a text token and 0 to padding. In short, think of texts encoded as [ word word word word word empty empty empty empty empty ] --> [ # # # # # 0 0 0 0 0 0 ... ]
@@tahirmiriyev7003 Thank you. But still, why exactly 77?
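The fixed-length padding described in this exchange can be sketched roughly like this (a minimal illustration only; the real CLIP tokenizer is a BPE tokenizer that also adds start/end tokens, and the token ids below are made up):

```python
CONTEXT_LENGTH = 77  # the fixed context length CLIP uses

def pad_to_context(token_ids, context_length=CONTEXT_LENGTH):
    """Truncate or zero-pad a list of token ids to a fixed length."""
    ids = token_ids[:context_length]
    return ids + [0] * (context_length - len(ids))

padded = pad_to_context([101, 2345, 678, 9])  # 4 "real" tokens
print(len(padded))  # always 77, regardless of the input text length
```

Fixing the sequence length this way lets prompts of different lengths be batched into one tensor of shape (batch, 77).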
Great explanation. Can you do a series on NeRF?
I'm not sure how I ended up here.. I must get back to simple-minded YouTube before my brain explodes.. 🤯🤯
What does CLIP actually encode into the vector length? When normalizing, don't you lose some information?
2 years in, still trying to understand that damn regex
Where can I get these notebooks?
I see you were watching Lawrence V Hamza Debate😉
?? 😅 What's that?
Oh hahah, just realized - my Chrome tab. I actually haven't watched it yet - it's just sitting there lol
@@TheAIEpiphany 😂😂I thought you were being like sarcastic or something and just trying to hide it 😂
Well I hope you enjoy it, it’s pretty good I think.
Also, thanks a lot for your videos, they are really helpful and inspiring. I too will want to make teaching videos and stuff soon, once I get a grip on more concepts
Brain 🧠 died
Great video!
I am not able to join the Discord channel.
Actually, my account was hacked by my friend and he spammed some channels.
Did you block me for this? Please unblock me?
Thanks
It's not "Jew-biter" notebook 😂. It's "Jew-peter" notebook 😂