Great series! Thanks for making the concepts approachable. These lectures are at a perfect level for understanding key concepts and for building the vocabulary and foundation needed to understand other available materials. I especially found Ava's overview of Transformers, and how the Q, K, and V matrices relate, to be an "aha" moment! Thank you, all.
Thank you for sharing quality content like this for free for several years
I don't even need to be at MIT to learn from them! Outstanding and clear delivery of difficult concepts. Thank you.
Dear Amini, the teaching was good too, especially the navigation part.
I wanted to extend my sincere thanks for the wonderful lecture you delivered on Deep Learning.
While sliding window approaches are good, YOLO outperforms Faster R-CNN and is generally considered state of the art for object detection.
Thank you! I have one doubt here: at 15:30 you said 10k neurons in the hidden layer for processing 10k inputs, so the result would be 10k^2 parameters. My doubt is why we need 10k neurons in any layer. We can decide the number of layers, right?
It's just an example; choosing the number of neurons and the number of layers is an engineering decision. Models tend to solve complex tasks better the deeper (or wider) they are, and the example of a 100 x 100 image with one fully-connected hidden layer of 10,000 neurons would have >100M connections/weights.
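The ">100M" figure above follows directly from multiplying the layer sizes. A minimal sketch, using only the numbers quoted in this thread:

```python
# Parameter count for one fully-connected hidden layer on a 100x100
# grayscale image, per the example in the reply above.
inputs = 100 * 100          # flattened pixels = 10,000 inputs
hidden = 10_000             # neurons in the hidden layer
weights = inputs * hidden   # one weight per (input, neuron) pair
biases = hidden             # one bias per neuron
total = weights + biases
print(total)                # 100010000, i.e. the ">100M" quoted above
```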
@primedanny417 True, but there are plenty of examples of fully connected networks that work and train well on 128x128 grayscale images, for example. I know they aren’t HD quality or SotA by any means, but to say FC nets are “completely impractical” as a blanket statement is a little strong IMO. Great lecture series; this is nit-picking here. We might as well criticize using the term “convolutional” without explaining that it’s typically implemented as a cross-correlation and not a convolution while we’re at it! 😆
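The cross-correlation point above is easy to demonstrate: deep learning "conv" layers slide the kernel as-is, while a true convolution flips the kernel first, so the two differ for any asymmetric kernel. A small sketch (array values are made up for illustration):

```python
import numpy as np

def cross_correlate2d(image, kernel):
    """Slide the kernel over the image WITHOUT flipping it --
    what most deep learning 'convolutional' layers actually compute."""
    kh, kw = kernel.shape
    h = image.shape[0] - kh + 1
    w = image.shape[1] - kw + 1
    out = np.empty((h, w))
    for i in range(h):
        for j in range(w):
            out[i, j] = np.sum(image[i:i+kh, j:j+kw] * kernel)
    return out

image = np.arange(16.0).reshape(4, 4)
kernel = np.array([[1.0, 2.0], [3.0, 4.0]])

corr = cross_correlate2d(image, kernel)              # "conv layer" output
conv = cross_correlate2d(image, kernel[::-1, ::-1])  # true convolution: flip both axes
print(np.array_equal(corr, conv))                    # False for this asymmetric kernel
```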
OMG, it's so intuitive! 🤩
Waiting patiently
That's the spirit
Thanks for sharing this knowledge. Be blessed
Thank you for the courses, we are learning a lot
Thanks for sharing this course, it's so useful!
Great courses thanks!❤
Love the lecture!
Software Lab 1 still not made available, when will that happen?
It is published now
Thanks for the lecture
Fantastic! Thank you for the lectures
Very nice explanation
Thank you very much, it is a great lecture. I hope you keep developing the lectures over the years, as the content seems to stay the same. Topics like pretrained models, knowledge transfer, and YOLO would be good additions to the CNN lecture.
Thanks for this great lecture series.
However, the audio is muffled at some points.
Waiting ..
But the lab between Lectures 2 and 3 is still not published on the website?
I think it is not their practice to publish their lab work
It has been published now
The lecture is awesome but the quality of audio is very poor.
It's weird that he uses Boston Dynamics robots in his first slides, since Boston Dynamics has gone on record saying they don't use AI.
Each color is a frequency range. Active CMOS sensor... photon → electron beam. If 3 LEDs can produce multicolor, I 🤔 I can use R, G & B bandpass filters to get the same result via a special-purpose digital oscilloscope. 😎😉
I'm confused about Lab 2 Part 2 (facial detection with CNNs). It is claimed that in the CelebA dataset most faces are of light-skinned females, yet the model ultimately gives lower accuracy for this category of faces compared to the other three categories. Why is that?
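For anyone reproducing this, the per-category numbers in the lab come from the standard fairness-audit recipe: split the test set by a demographic attribute and compute accuracy within each split. A hedged sketch; the group names, labels, and predictions below are invented for illustration, not the lab's actual data:

```python
from collections import defaultdict

# (group, true_label, predicted_label) -- toy records, not CelebA outputs
samples = [
    ("light_female", 1, 1), ("light_female", 1, 0), ("light_female", 1, 1),
    ("dark_male", 1, 1), ("dark_male", 1, 1),
]

correct = defaultdict(int)
total = defaultdict(int)
for group, y_true, y_pred in samples:
    total[group] += 1
    correct[group] += int(y_true == y_pred)

# Per-group accuracy: differences across groups are what the lab surfaces.
for group in total:
    print(group, correct[group] / total[group])
```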
Where did you find the labs? Are they available on YouTube?
Can't wait...
Hello Alex, please enlighten the peasants with a juicy time-series episode! If you had been my teacher since I was a kid, I would be a different person today. Thank you for this; grateful today and in the future.
Time series intro lecture would be great to watch indeed!
I love you but the Keller Paradox points to overlooked emergence.
Great content, but audio sounds like it was recorded with a toaster
Have any of the labs been published yet?
yes
@@RajeevKumar-dq4ct Where? Are they free or are they paid?
holy smokes!
right?
Thank you for sharing. Please, I need help: I sent you an email but got no response. Could you please help me?
Thanks in advance.
Something's gotta be done about the mic used for the questions; the sound is absolutely horrible!!!