[Classic] Deep Residual Learning for Image Recognition (Paper Explained)
- Published 15 Jun 2024
- #ai #research #resnet
ResNets are one of the cornerstones of modern Computer Vision. Before their invention, people were not able to scale deep neural networks beyond 20 or so layers, but with this paper's invention of residual connections, all of a sudden networks could be arbitrarily deep. This led to a big spike in the performance of convolutional neural networks and rapid adoption in the community. To this day, ResNets are the backbone of most vision models and residual connections appear all throughout deep learning.
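To make the core mechanism concrete, here is a minimal sketch of a residual block in PyTorch (my own illustration, not the authors' reference code; the channel count and input size are arbitrary). The block computes a correction F(x) with two 3x3 convolutions and adds the untouched input x back before the final ReLU, so the output is F(x) + x.

```python
import torch
import torch.nn as nn

class BasicResidualBlock(nn.Module):
    """Two 3x3 convs form the residual branch F(x); the input is added back."""
    def __init__(self, channels: int):
        super().__init__()
        self.conv1 = nn.Conv2d(channels, channels, kernel_size=3, padding=1, bias=False)
        self.bn1 = nn.BatchNorm2d(channels)
        self.conv2 = nn.Conv2d(channels, channels, kernel_size=3, padding=1, bias=False)
        self.bn2 = nn.BatchNorm2d(channels)
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        out = self.relu(self.bn1(self.conv1(x)))  # first half of F(x)
        out = self.bn2(self.conv2(out))           # second half of F(x)
        return self.relu(out + x)                 # output = F(x) + x, then ReLU

x = torch.randn(1, 64, 56, 56)
print(BasicResidualBlock(64)(x).shape)  # torch.Size([1, 64, 56, 56])
```

Because the skip path is a plain addition, gradients flow through it unchanged, which is what lets these blocks be stacked far deeper than plain VGG-style stacks.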
OUTLINE:
0:00 - Intro & Overview
1:45 - The Problem with Depth
3:15 - VGG-Style Networks
6:00 - Overfitting is Not the Problem
7:25 - Motivation for Residual Connections
10:25 - Residual Blocks
12:10 - From VGG to ResNet
18:50 - Experimental Results
23:30 - Bottleneck Blocks
24:40 - Deeper ResNets
28:15 - More Results
29:50 - Conclusion & Comments
Paper: arxiv.org/abs/1512.03385
Abstract:
Deeper neural networks are more difficult to train. We present a residual learning framework to ease the training of networks that are substantially deeper than those used previously. We explicitly reformulate the layers as learning residual functions with reference to the layer inputs, instead of learning unreferenced functions. We provide comprehensive empirical evidence showing that these residual networks are easier to optimize, and can gain accuracy from considerably increased depth. On the ImageNet dataset we evaluate residual nets with a depth of up to 152 layers---8x deeper than VGG nets but still having lower complexity. An ensemble of these residual nets achieves 3.57% error on the ImageNet test set. This result won the 1st place on the ILSVRC 2015 classification task. We also present analysis on CIFAR-10 with 100 and 1000 layers.
The depth of representations is of central importance for many visual recognition tasks. Solely due to our extremely deep representations, we obtain a 28% relative improvement on the COCO object detection dataset. Deep residual nets are foundations of our submissions to ILSVRC & COCO 2015 competitions, where we also won the 1st places on the tasks of ImageNet detection, ImageNet localization, COCO detection, and COCO segmentation.
Authors: Kaiming He, Xiangyu Zhang, Shaoqing Ren, Jian Sun
Links:
YouTube: / yannickilcher
Twitter: / ykilcher
Discord: / discord
BitChute: www.bitchute.com/channel/yann...
Minds: www.minds.com/ykilcher
Parler: parler.com/profile/YannicKilcher
LinkedIn: / yannic-kilcher-488534136
If you want to support me, the best thing to do is to share out the content :)
If you want to support me financially (completely optional and voluntary, but a lot of people have asked for this):
SubscribeStar: www.subscribestar.com/yannick...
Patreon: / yannickilcher
Bitcoin (BTC): bc1q49lsw3q325tr58ygf8sudx2dqfguclvngvy2cq
Ethereum (ETH): 0x7ad3513E3B8f66799f507Aa7874b1B0eBC7F85e2
Litecoin (LTC): LQW2TRyKYetVC8WjFkhpPhtpbDM4Vw7r9m
Monero (XMR): 4ACL8AGrEo5hAir8A9CeVrW8pEauWvnp1WnSDZxW7tziCDLhZAGsgzhRQABDnFy8yuM9fWJDviJPHKRjV4FWt19CJZN9D4n
This is a pre-recorded scheduled release :D still on break :)
For someone like me who has ventured into neural nets only recently, this explanation is a boon. It was like listening to the classics. Legendary paper and an equally awesome explanation.
Yep, revisiting this classic paper in your usual style was still interesting to me. Thanks as always
I've watched like 5 other videos explaining ResNets and this was the only video I needed. Thank you so much for explaining it so clearly!!
Thanks for this! And I really enjoy going through the old papers, since you can pick up things you missed when first reading them. Enjoy the break!!
I really wanted to drop you a line that I really, really enjoyed your paper walkthrough; super informative and entertaining! Thank you so much for uploading this! :)
This is probably one of the better videos on these classic research papers on YouTube. I've seen some terrible explanations, but you did pretty well. Good job!
Really enjoyed this video! I think going through these older papers that had a lasting impact over multiple years gives really great insight, especially to those who are fairly new to the field like me.
Hey Yannic, you are such good company for learning deep learning. You lifted me out of all the struggles. Thank you for sharing your insight.
Revisiting the classics which massively changed and forged the direction of DL research is so much fun. Loved the way you explained things. So cool. Thanks a lot :)
Visiting old and influential papers seems like a great idea
I was doing something similar for a few decades before this paper came out (no ReLU on the stage output, though). I was engaged in studies in layer by layer training, and the argument for me was "why spend all that time generating a good output for layer k, just to distort it in layer k+1?" Also, I think the physicist in me liked the notion of nonlinear perturbation of a linear model, since linear models work really well a lot of the time (MNIST, I'm looking at you). At any rate, this approach worked quite well in the time series signal processing I was doing, and when the paper came out, I read with relish to see what else they had found that was new. Unfortunately, like you I found that underneath the key idea was a heap of tricks to make the whole thing hang together which seemed to obscure how much was ResNet and how much was tricks.
Thanks Yannic. Revisiting these classic papers is very helpful for beginners like me.
Thanks, this was fun. I knew some of it, but you put it in context.
Please do more of these classics. If you can, maybe something on UNET/fully convolutional basic papers.
This is a great series. I'm a very experienced software and hardware engineer who's just now getting serious about learning ML, deep learning, and the whole space. So what really helps me at this point is not NN 101 but understanding the landscape: what all the acronyms mean and the relative importance of various ideas and techniques. This review of classic material is extremely helpful: it paints a picture of the world and helps me put things in their places in my mental model. Then I can dive deeper when I see something important for my current tasks and needs. Keep these coming!
Yannic - you are doing a superb job. Your quality content has a "lower dopamine rush effect". Thus it would not go viral, but with time you will be a force to reckon with. Not many can explain with so much clarity, depth & speed (one paper daily). I have one request: if you could create an ACTIVE mapping of papers to CITATIONS (and similar metrics), I would get to choose the MOST RELEVANT PAPERS to watch. It would be a great time saver & drastically improve views on the better-metric videos :)
COOL! Discussing these classics is a formidable tribute to the writers and a great way to emphasize their contributions to the history of Artificial Intelligence.
Great video! I have watched like five videos about ResNet on youtube and this one is by far the best. Thanks.
Loved it and subscribed! And yes please do more of classics!
I've used Resnets quite a bit and thought I understood the paper reasonably well when I read it, but I was wrong. Great video!
Great video! I think there is a lot of value in reviewing old papers when they are cited all the time by the new ones. That is exactly the case with ResNets.
I'm struggling to understand papers, but your explanation really hand-held me through grasping this particular paper. For that, to me you are awesome. Thank you so much.
Thanks for visiting iconic papers, great content!!!
Thanks for these videos in the classics series. Not all of us have a master's or PhD degree, and these classic papers help us understand the main, core ideas of deep learning - the papers that are important and push the field forward.
"Sadly, the world has taken the ResNet, but the world hasn't all taken the research methodology of this paper." I really appreciate your picks are not only those papers surpassing the performance of the state of the art, but also those with intriguing insights or papers inspiring us by their ways of conducting experiments and testing hypotheses. Most vanish, but residual, as it moves forward.
Another excellent summary! Yannic is one of the best educators out there!
Thanks for these great explanations. I'm still a beginner in deep learning, but I understood the paper very well!
Great paper. It must be obvious to you but, to a layman, I finally understand where the "Res" in "ResNet" comes from. Great work.
Would love to see papers like these which have used unique tricks for training. I request you do more videos on papers which solve the problems of training neural networks - tips and tricks and why they work. Why local response normalisation works, what the best way is to initialise your network layers for a vision task or for an NLP task. In a nutshell, what works and why.🙏
Nice explanation. I've read the paper before and missed a lot of details. There are still more insights to learn from that paper.
Thank you for explaining it! So much easier for a beginner like me to understand
Building hype for attention is all you need v2! Nice selection!
Thank you! This is unbelievably helpful as someone who's just starting out. Subscribed!
I like how you have highlighted that if a small architecture exists that can solve a problem, residual connections will help discover it within a larger architecture - I think this is a great explanation of the power of residual connections. This has two nice implications. First, I do not need to worry about finding exactly how many layers are appropriate; I can start with a supersized architecture and let training reduce it to the subset that is needed - let the data carve out the subnetwork architecture. Secondly, even if the subnetwork is small, it is harder to directly train a small network; it is easier to train a larger network with more degrees of freedom that functionally reduces to the smaller one. One can distill later.
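That intuition is easy to check numerically. A tiny sketch (my own toy in PyTorch, not from the paper): if training drives the residual branch of a block to zero, the block reduces exactly to the identity mapping, so superfluous blocks can effectively switch themselves off.

```python
import torch
import torch.nn as nn

# A small residual branch; zeroing its last conv makes the branch output all zeros.
branch = nn.Sequential(
    nn.Conv2d(16, 16, kernel_size=3, padding=1),
    nn.ReLU(),
    nn.Conv2d(16, 16, kernel_size=3, padding=1),
)
nn.init.zeros_(branch[-1].weight)
nn.init.zeros_(branch[-1].bias)

x = torch.randn(2, 16, 8, 8)
y = x + branch(x)            # residual block: skip connection + branch
print(torch.allclose(x, y))  # True: with a zeroed branch, the block is the identity
```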
Excellent video featuring an extraordinary paper. Good job bro
10:50 This should have been so obvious, how did I never think of it like that 😨
This is beautiful! A beautiful paper and a beautiful explanation - simplicity is genius!
Great discussion of the paper. Thanks for doing this.
I think the identity for a 3x3 matrix would be a diagonal of 1s instead of a 1 in the center. @Yannic Kilcher 08:50
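For what it's worth, the two notions of "identity" differ here: a diagonal of 1s is the identity for a 3x3 matrix multiplication, but for a 3x3 convolution kernel the identity is a single 1 in the center. A quick check (my own sketch, assuming NumPy and SciPy are available):

```python
import numpy as np
from scipy.signal import convolve2d

image = np.arange(25, dtype=float).reshape(5, 5)

center_one = np.zeros((3, 3)); center_one[1, 1] = 1.0  # identity kernel for convolution
diag_ones = np.eye(3)                                   # identity only for matrix multiply

print(np.allclose(convolve2d(image, center_one, mode="same"), image))  # True
print(np.allclose(convolve2d(image, diag_ones, mode="same"), image))   # False
```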
Very insightful explanation for beginners like me. Thank you.
Amazing explanation. Keep up the good work!
Loved the way you are reviewing papers.
Wow, you read the author names perfectly!
Self-learning ANNs and coming across these papers is daunting - tysm!!
what a fantastic summary, thank you very much !
Would love a video enumerating, with explanations, all the learned lessons, organized by their importance to modern solutions.
Another great one! I would like to request, if possible, a review of angular losses, especially ArcFace, as another *classic* review, since they have begun being adopted for multiple classification tasks.
Thanks!
Amazing narration, keep up the excellent work.
Your explanations resonate so well with me that it is like pushing knowledge directly into my head. Does anyone else have the same feeling?
Loved the explanation. Thank you so much!
Thank you for this clear explanation!
Very enjoyable, insight-filled presentation, Yannic, thanks! It almost seems like residual connections allow the network to only use the layers that don't corrupt the insight. Since every fully connected or convolutional layer is a destructive operation (a reduction) of its inputs, the signal may get distorted beyond recovery over a few blocks. By having a sideline crosswire where not only the original input but any derived computation can potentially be preserved at each step, the network is freed from the 'tyranny of transformation'. :)
Both the paper and Yannic highlight the idea that - the goal shifts from 'deriving new insights from data' to 'preserving input as long (deep) as needed' - while all other types of layers in a network distort information or derive inferences from data, the residual connection allows preserving information and protecting it from being automatically distorted, so that any information can be safely copied over to any later layer.
The residual connection can be seen as similar to the invention of zero in arithmetic.
Love these reviews of earlier landmark papers! Thanks!!!
I loved this paper. ResNets are still cool. Nowadays there are more complicated versions of these nets, but the ideas still pretty much hold.
Nice video by the way.
I love the old papers idea! Nice video
That was a short break
:D
It's pre-recorded :)
Thank you so much! Keep making such awesome videos
Best explanation I have seen, nice work
Thank you very much for the explanation! I'm just starting to use pretrained nets and wondered how I could improve the performance of my models, and this video cleared up many doubts I had. Keep up the amazing work!
Thank you Yannic for this great work
Revisiting classic papers is SO NICE for new people entering the field, to understand the history of the million tricks that get automatically applied nowadays.
Thank You for this beautiful explanation!!
This is really valuable tbh. Great video!
Great idea to review classic papers.
This helped me so much, big thanks to you
Thanks so much! This is extremely helpful
looking forward to more videos like this!
Universal transformer please! Love your videos, great job
I love this series on historical papers
Thanks a lot ! Amazing explanation :)
Love the [Classic] series.
Very helpful. Thanks a lot. 👍👌
Hats off to Dedication level 💯
such a great explanation... tysm
Nice review about residual network!
Little question about the shortcut connections when the shape changes: a simple 1x1 convolution can give the right depth, but the feature maps would still be the original size. So I assume the 1x1 convolutions also use stride 2?
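Yes: in the paper's option B, the projection shortcuts that cross feature-map sizes are 1x1 convolutions applied with stride 2, so the shortcut halves the spatial resolution just like the main path. A sketch of such a downsampling block (my own PyTorch illustration, not the reference implementation):

```python
import torch
import torch.nn as nn

class DownsampleBlock(nn.Module):
    """Residual block that doubles the channels and halves the spatial size."""
    def __init__(self, in_ch: int, out_ch: int):
        super().__init__()
        self.conv1 = nn.Conv2d(in_ch, out_ch, 3, stride=2, padding=1, bias=False)
        self.bn1 = nn.BatchNorm2d(out_ch)
        self.conv2 = nn.Conv2d(out_ch, out_ch, 3, stride=1, padding=1, bias=False)
        self.bn2 = nn.BatchNorm2d(out_ch)
        self.shortcut = nn.Sequential(                 # projection shortcut (option B)
            nn.Conv2d(in_ch, out_ch, 1, stride=2, bias=False),
            nn.BatchNorm2d(out_ch),
        )
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        out = self.relu(self.bn1(self.conv1(x)))
        out = self.bn2(self.conv2(out))
        return self.relu(out + self.shortcut(x))       # both paths are now at half resolution

x = torch.randn(1, 64, 56, 56)
print(DownsampleBlock(64, 128)(x).shape)  # torch.Size([1, 128, 28, 28])
```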
24:06 I think LeNet also did something similar but my memory fades.
Legendary paper. Great work. Too bad that, I think, we haven't seen any major breakthroughs in the last two years.
Is the large-scale use of transformers not a big breakthrough?
@dipamchakraborty Transformers came out in 2017, if I remember right.
Came here from DongXii to support our NIO superstar, Ren Shaoqing!
I've got the impression that you're a very good Chinese speaker from your pronunciation of the authors' names.
Very clear thank you!
This is it!!!!! Great thanks from South Korea!!!!!
Please make more videos on classic papers, like YOLO and Inception!!
Very nice, thanks! :)
You are back! I was getting withdrawals lol
Great video! Thanks
I love how you will *not* review papers based on impact, except when you do :D
JK, please mix in more [classic] papers, or whatever else you feel like - just keep up the drive for ML. It's contagious! 💦
An idea: a combined review/your take on a whole class of models (e.g. MobileNet and its variants and/or YOLO and its variants).
That was a great explanation.
Fantastic explanation
excellent explanation
A few questions about the parameters of the ResNet. As far as I understood it, you concatenate the input with the output of another layer. This enables you to train more stable networks. Why does this lead to fewer parameters than VGG? I would suggest that this is the case because you perform the more costly operations (more filters) on layers whose dimensions are already reduced due to the stride? Is this correct?
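(Note that the shortcut adds the input to the block's output elementwise rather than concatenating it.) Part of the efficiency does come from downsampling early (the stride-2 7x7 conv and pooling at the start), but the biggest parameter saving over VGG is that ResNet replaces VGG's huge fully connected layers with global average pooling followed by one small classifier. Rough arithmetic, ignoring biases (my own back-of-the-envelope numbers, not figures quoted in the paper):

```python
# VGG-16's three fully connected layers vs. ResNet-34's single classifier layer.
vgg_fc = 512 * 7 * 7 * 4096 + 4096 * 4096 + 4096 * 1000  # fc6 + fc7 + fc8
resnet34_fc = 512 * 1000                                  # global average pool, then one FC
print(f"VGG FC parameters:       {vgg_fc / 1e6:.1f} M")       # ~123.6 M
print(f"ResNet-34 FC parameters: {resnet34_fc / 1e6:.2f} M")  # ~0.51 M
```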
In 3.4 - Implementation, it says that they use BN after each conv layer and before the activation. Does this hold true for ResNet-50 and deeper? In the bottleneck blocks, do they add BN after the first 1x1 conv layer, then after the 3x3, and lastly after the final 1x1 again? Or was the Implementation part only discussing the ResNet-34 structure?
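As far as I can tell, the Section 3.4 statement applies to all depths, not just ResNet-34: in common implementations (e.g. torchvision's ResNet-50), batch norm follows each of the three convolutions in a bottleneck block, each before its ReLU, and the final ReLU comes after the addition. A sketch of that layout (my own illustration, not the authors' code):

```python
import torch
import torch.nn as nn

class Bottleneck(nn.Module):
    """1x1 reduce -> 3x3 -> 1x1 expand, with BN after every conv."""
    def __init__(self, channels: int, bottleneck: int):
        super().__init__()
        self.reduce = nn.Sequential(                       # 1x1: shrink the channel count
            nn.Conv2d(channels, bottleneck, 1, bias=False),
            nn.BatchNorm2d(bottleneck), nn.ReLU(inplace=True))
        self.conv = nn.Sequential(                         # 3x3 on the thin representation
            nn.Conv2d(bottleneck, bottleneck, 3, padding=1, bias=False),
            nn.BatchNorm2d(bottleneck), nn.ReLU(inplace=True))
        self.expand = nn.Sequential(                       # 1x1: restore channels, no ReLU yet
            nn.Conv2d(bottleneck, channels, 1, bias=False),
            nn.BatchNorm2d(channels))
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        out = self.expand(self.conv(self.reduce(x)))
        return self.relu(out + x)                          # ReLU only after the addition

print(Bottleneck(256, 64)(torch.randn(1, 256, 56, 56)).shape)  # torch.Size([1, 256, 56, 56])
```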
Thank you man.
thank you!!
I don't understand the part from the start of the quoted statement (which I will write) up to 9:28, where you are saying, "instead of learning to transform X via neural networks to X, which is an identity function, why don't we have X stay X and then learn whatever we need to change?"
Can you explain this part to me with some analogy? I am a beginner here. Thanks!!
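One way to see it: the block's target mapping H(x) is rewritten as H(x) = F(x) + x, so the weights only have to learn the correction F(x) = H(x) - x, while the skip connection supplies the "copy x through" part for free. When the ideal mapping is close to the identity, F is close to zero, which is easy for a layer to represent. A toy numeric sketch (my own illustration; the target mapping H here is made up):

```python
import numpy as np

x = np.array([1.0, 2.0, 3.0])
H = lambda v: v + 0.01 * np.sin(v)  # hypothetical target mapping, very close to the identity
F = lambda v: H(v) - v              # the residual the block actually has to learn

print(F(x))              # tiny values -> representable with small, easy-to-find weights
print(x + F(x) - H(x))   # all zeros: skip connection + residual reconstructs H(x) exactly
```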
Give us a chance to catch up!
Very good video
Thanks!
Can you please continue explaining more papers?
I disagree with the assertion that the layers are learning "smaller" functions in ResNets. The results cited to support this claim, that the activations of the layers in the ResNets are smaller than those in comparable feed-forward networks, can be caused by small weights and large biases, which L2 regularization would encourage since it only operates on weights and not biases. The average magnitude of the weights in a layer has no relation to the complexity of the function they encode, since the weights of a layer can simply be scaled down without drastically changing this function. Moreover, in their paper on the Lottery Ticket Hypothesis, Frankle et al. find that ResNets are generally less compressible than feed-forward networks, meaning the functions they encode are more complex than in comparable feed-forward networks.
What is the Inception-net hypothesis? In the Xception paper, the author explained the hypothesis behind Inception-net, but I couldn't grasp it fully and got a bit lost. Can you explain that?
I'm sorry, I have no clue what the Inception-net hypothesis is, and I also don't know too much about Inception networks.