Very simple way of explaining deep learning. Awesome playlist
Thanks and glad u liked. Your comments keep me moving
You explained the concept in the best way.
Do the table values change? Do the stride and image size from the AlexNet architecture table in the video remain the same irrespective of the input image?
Hi, thanks for your really nice explanation. Could you please explain a bit more about why we have 96 filters?
Your voice is similar to Ravi Ashwin's. Great explanation!
Oh ho...thanks buddy..
It is a good explanation, put in a simple way. Thank you.
Thanks and glad you liked it
Can we change parameters like input size, number of layers, feature maps, kernel size, stride, or activation? Will changing any of them change the AlexNet architecture? Is it necessary to use the same parameters described in the video for the architecture to be called AlexNet? If I change the parameters, will it still be called AlexNet?
lemme know if you have found the answer
Hey bro, you mentioned transfer learning; where are the weights for this? I think we are using only the architecture of AlexNet. How do we apply the weights?
You didn't use batch normalization, or Local Response Normalization as per the paper.
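(For anyone with the same question: the paper applies Local Response Normalization after the first two conv layers, and many re-implementations either drop it or substitute batch normalization. A minimal, hedged sketch of both options in TensorFlow/Keras; this is illustrative, not the video's exact code:)

```python
import tensorflow as tf
from tensorflow.keras import layers

# Local Response Normalization with the paper's hyperparameters
# (k=2, alpha=1e-4, beta=0.75; the paper's n=5 window corresponds
# to depth_radius=2 here), wrapped in a Lambda layer.
lrn = layers.Lambda(lambda x: tf.nn.local_response_normalization(
    x, depth_radius=2, bias=2.0, alpha=1e-4, beta=0.75))

# The common modern substitute used by many AlexNet re-implementations.
bn = layers.BatchNormalization()
```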
Hello Shriram, you have explained this very well. I have a question: conv layer 2 has a padding of 2 as per the architecture, but I see you have mentioned the padding as 'valid' for this layer. Could you kindly clarify this? Am I missing something here?
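(On the padding question, one possible explanation: Keras Conv2D only accepts 'valid' or 'same', so the paper's explicit padding of 2 for conv-2 is usually reproduced either with a ZeroPadding2D layer before a 'valid' convolution, or simply with padding='same', which for a 5x5 kernel at stride 1 also pads by 2. A small hedged sketch, not the video's exact code:)

```python
from tensorflow.keras import layers, Input, Model

# conv-2 of AlexNet: the input is the 27x27x96 output of pool-1.
inp = Input(shape=(27, 27, 96))
# Pad by 2 pixels on every side, then convolve with no further padding:
# (27 + 2*2 - 5)/1 + 1 = 27, so the output is 27x27x256 as on the slides.
x = layers.ZeroPadding2D(padding=2)(inp)
x = layers.Conv2D(256, kernel_size=5, strides=1, padding='valid',
                  activation='relu')(x)
print(Model(inp, x).output_shape)   # (None, 27, 27, 256)
```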
Nice effort, Appreciated!
You talked about augmentation, but I can't find it in the code...
Augmentation is dealt with separately.
Can you please provide example code?
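(Until that separate video is up, here is a rough, hedged sketch of the kind of augmentation AlexNet-style training typically uses, written with Keras preprocessing layers from recent TensorFlow versions; it is illustrative, not the code from the video:)

```python
from tensorflow.keras import layers, Sequential

# Simple augmentation pipeline, applied to training images only.
augment = Sequential([
    layers.RandomFlip("horizontal"),        # mirror images left-right
    layers.RandomTranslation(0.1, 0.1),     # small random shifts
    layers.RandomZoom(0.1),                 # small random zooms
])
```

The original paper instead takes random 224x224 crops of 256x256 images, adds horizontal flips, and applies PCA-based colour jitter; the layers above are just a common lightweight stand-in.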
Nicely explained and very easy to understand.
Thanks
Is it possible to build the entire model without using the library?
Yes, but it's a bit challenging.
You have made it very easy to understand
Thank you
Hello sir, this is very informative. I want to ask: would it be possible for you to make a video on training the AlexNet architecture on the CIFAR-10 dataset, explaining how to rescale the images from 32x32 to 227x227? That would be very helpful.
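(Until such a video exists, a rough sketch of the resizing step alone, assuming TensorFlow and tf.data; the variable names are illustrative and not from the video. Resizing lazily inside the input pipeline avoids holding 50,000 full-size 227x227 images in memory:)

```python
import tensorflow as tf

# CIFAR-10 images are 32x32x3; AlexNet as built here expects 227x227x3.
(x_train, y_train), (x_test, y_test) = tf.keras.datasets.cifar10.load_data()

def to_alexnet_size(image, label):
    image = tf.cast(image, tf.float32) / 255.0    # scale pixels to [0, 1]
    image = tf.image.resize(image, (227, 227))    # upsample to 227x227
    return image, label

train_ds = (tf.data.Dataset.from_tensor_slices((x_train, y_train))
            .map(to_alexnet_size, num_parallel_calls=tf.data.AUTOTUNE)
            .batch(32)
            .prefetch(tf.data.AUTOTUNE))
```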
Great explanation sir!!
Keep watching
You can train this AlexNet in TensorFlow, but TensorFlow won't provide pretrained weights for AlexNet; only PyTorch provides those.
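(To make that concrete, and to answer the earlier question about how to apply the weights: a minimal, hedged PyTorch sketch of loading torchvision's pretrained AlexNet and reusing it for transfer learning. It assumes a recent torchvision; older versions use pretrained=True instead of the weights argument:)

```python
import torch
from torchvision import models

# Load AlexNet with the ImageNet-pretrained weights shipped by torchvision.
model = models.alexnet(weights=models.AlexNet_Weights.IMAGENET1K_V1)

# Typical transfer learning: freeze the convolutional feature extractor
# and replace the final 1000-way classifier with one for your own classes.
for p in model.features.parameters():
    p.requires_grad = False
model.classifier[6] = torch.nn.Linear(4096, 10)   # e.g. 10 target classes
```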
Can you please share the code for AlexNet? It would help a lot.
I will upload it to my Git repo tomorrow; username shriramkv.
@ShriramVasudevan please add the link to it in the description.
Thanks for the explanation in a simple way
Thanks and glad u liked it
Sir, it is very informative. Thank you very much.
Sir, at the second convolution layer it is said that the size is 27x27x256, but it should be 23x23x256, right? In the program output it is calculated that way, but in all the slides available it is 27x27x256, with corresponding changes in the following layers (i.e. in the next layer it is 11x11, then 9x9, then 7x7, etc.).
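(The slide figure of 27x27x256 assumes the paper's padding of 2 on conv-2; the 23x23 you computed is what 'valid' padding gives, which is likely what the program used. A quick hedged check with the standard output-size formula:)

```python
def conv_out(w, k, s=1, p=0):
    """Spatial output size of a convolution: floor((W - K + 2P) / S) + 1."""
    return (w - k + 2 * p) // s + 1

print(conv_out(27, 5, s=1, p=2))   # 27 -> with padding 2, as on the slides
print(conv_out(27, 5, s=1, p=0))   # 23 -> with 'valid' padding, as in the code
```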
Hey, thanks for the explanation.
Regarding the input dimension of the AlexNet network: I tried implementing it in TensorFlow and it only works if the size is 227x227. But many of the papers that followed use different dimensions; the SPPNet paper says 224x224x3, and the R-CNN paper uses 227x227. Could you please explain which one should be believed?
To my knowledge... it should work with any dimension. There is no hard rule about the dimensions bro.
The larger the image dimension, the more time it's gonna take for training and inference!
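(A hedged way to reconcile the two numbers: with conv-1's 11x11 kernel, stride 4 and no padding, only a 227x227 input produces the 55x55 feature map shown in the tables; 224x224 only works if some padding is assumed, which is why the paper's 224 is often treated as a typo and most re-implementations, R-CNN included, use 227:)

```python
def conv_out(w, k, s, p=0):
    # Spatial output size: floor((W - K + 2P) / S) + 1
    return (w - k + 2 * p) // s + 1

print(conv_out(227, 11, 4))        # 55 -> matches the 55x55x96 conv-1 output
print(conv_out(224, 11, 4))        # 54 -> 224 does not divide evenly
print(conv_out(224, 11, 4, p=2))   # 55 -> 224 only works with extra padding
```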
Nice tutorial and demo, sir. I want to ask about the output layer. In this video the architecture's output layer uses 1000 classes; if we have 4 classes, do we just change the output layer to 4 classes?
Did you find the answer? I have the same question.
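(For anyone with the same question: yes, only the final layer has to change so that it has one unit per class. A minimal hedged Keras sketch of the classifier head, illustrative rather than the video's exact code:)

```python
from tensorflow.keras import layers, Sequential

# AlexNet's head ends in Dense(1000, softmax) for the 1000 ImageNet classes.
# For a 4-class problem, only that last layer changes:
head = Sequential([
    layers.Dense(4096, activation='relu'),
    layers.Dropout(0.5),
    layers.Dense(4096, activation='relu'),
    layers.Dropout(0.5),
    layers.Dense(4, activation='softmax'),   # was Dense(1000) for ImageNet
])
```

The loss then becomes a (sparse) categorical cross-entropy over the 4 classes.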
This is one of the best.
Very informative and useful, sir; very useful for my research. Can you explain the drawbacks of these pretrained models, like GoogLeNet, Inception, and ResNet?
Hello, I need the code.
Can you please share the code?
Please pull it from my Git repo: @shriramkv.
thanks
You're welcome!
Very nicely done
Please provide the code, sir.
Very well done..
Sir, thank you very much!
Thank you
This is the best I have found on the internet, but it would have been even better if you had implemented it with an example.
I have done that. Video follows shortly brother.
@ShriramVasudevan Please upload it, bro; we are waiting...
Clearly explained!
Thanks hasina
Why 227x227? The paper itself says 224x224. Could you please explain?
Sir, very nice explanation, and nice slides as well. Thanks, sir. Could you kindly share the slides?
Thanks
Please, sir, can you send me the code?
It's in my Git repo: @shriramkv.
Very nice sir...
Thank you very much
Thanks so much!
Thanks Brother.
👍
Thanks and glad you liked it
nice
Thanks and glad u liked