This is the best lecture I have seen since SICP, so beautiful
thank you for posting such high-quality lectures online for free!! amazing lecturer, slides and content
Really love this step-by-step walkthrough! Huge improvement over the 2017 CS231n course!
Great lecture, very well explained, step by step. Maybe the best I've found so far.
57:53 should be "from anchor box to proposal box"
Another amazing class! I look forward to watching the updated version describing the use of Transformers in the coming years. Thank you Dr. Justin.
I know it's quite off topic but does anyone know of a good site to watch new series online?
@@samuelimran3429 Can you send a link? I searched Google but don't see anything :(
@@chiendvhust8122 The latest videos are not publicly available
49:59 How do you project an RoI onto the feature map exactly? 50:10 Does snapping the projection to the feature-map grid affect the transformation parameters of the bounding box regression?
No, you've got the wrong understanding. The box was obtained using heuristic methods on the original picture. The convnet can be seen as a transformation: it converts the cat's picture into a feature map. That conversion process is the projection.
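For what it's worth, the projection at 49:59 can be sketched roughly like this, assuming a total convnet stride of 16 (as with a VGG-style backbone); the function name and numbers are mine, not the lecture's:

```python
import math

def project_roi(x1, y1, x2, y2, stride=16):
    """Map an RoI given in image pixels to feature-map cells.

    Dividing by the stride is the "projection"; rounding the corners is
    the snapping to the feature-map grid that RoI Pool performs (RoI
    Align avoids this rounding by keeping fractional coordinates).
    """
    fx1 = math.floor(x1 / stride)
    fy1 = math.floor(y1 / stride)
    fx2 = math.ceil(x2 / stride)
    fy2 = math.ceil(y2 / stride)
    return fx1, fy1, fx2, fy2

print(project_roi(33, 47, 290, 210))  # -> (2, 2, 19, 14)
```

The snapping does perturb the box slightly, which is one motivation for RoI Align; the regression targets themselves are still computed against the un-snapped proposal coordinates.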
31:20 The purple box should be the union of both boxes. Here it is overflowing.
I watched a lecture on RNN delivered by him on Stanford channel on YT, that was good
When we compute the average precision (42:52), is this for one image? A batch? The whole training set?
all test images
59:00 I don’t quite get the 2k anchor (2 scores) vs 1k (1 score) part. Hmmm
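In case it helps: with 2k scores the RPN treats objectness as a two-class softmax (object vs. background) per anchor, while with k scores it uses a single sigmoid logit per anchor. The two parameterizations carry the same information, since a two-logit softmax depends only on the difference of the logits. A quick numerical sketch (my own, not from the lecture):

```python
import math

def softmax_obj(z_obj, z_bg):
    """Objectness probability from a 2-logit softmax (the "2k scores" form)."""
    e_obj, e_bg = math.exp(z_obj), math.exp(z_bg)
    return e_obj / (e_obj + e_bg)

def sigmoid_obj(z):
    """Objectness probability from a single logit (the "k scores" form)."""
    return 1.0 / (1.0 + math.exp(-z))

# softmax([z_obj, z_bg]) depends only on z_obj - z_bg, so one sigmoid
# logit per anchor is exactly as expressive as two softmax logits.
z_obj, z_bg = 1.7, -0.4
print(softmax_obj(z_obj, z_bg))   # ~0.8909
print(sigmoid_obj(z_obj - z_bg))  # same value
```

So the choice is mostly a bookkeeping/convention difference, not a modeling one.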
42:12 I am really confused about why all dog detections are considered positive here (precision = 3/5)? Shouldn’t we set a threshold? Thanks.
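For what it's worth, there is no single fixed score threshold in AP: detections are sorted by descending confidence, and every prefix of that ranked list implicitly acts as one threshold, which is why all five dog detections eventually enter the sweep. A rough sketch of one common un-interpolated AP variant (simplified; names and the example numbers are mine, not necessarily the lecture's):

```python
def average_precision(hits, num_gt):
    """hits: booleans for detections sorted by descending score; True means
    the detection matched an unclaimed ground-truth box (e.g. IoU >= 0.5).
    Averages the precision measured at each true-positive rank over the
    number of ground-truth boxes."""
    tp = 0
    ap = 0.0
    for rank, hit in enumerate(hits, start=1):
        if hit:
            tp += 1
            ap += tp / rank  # precision at this recall point
    return ap / num_gt

# Hypothetical example: 5 detections, matches at ranks 1, 3, 5, and 3 GT boxes.
print(average_precision([True, False, True, False, True], num_gt=3))
```

Interpolated variants (e.g. PASCAL VOC's) post-process the precision-recall points differently, so the exact number can differ from the lecture's, but the ranked sweep is the same idea.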
23:00 and 23:41 How is the learnt transformation invariant to the RoI warp? 1. Warping changes height and width. 2. Warped RoIs are fed into the CNN. I'd appreciate it if anyone could shed some light here. Thanks.
Do you know the answer now? I have the same question.
Looking at this coming from NLP, NLP seems so much easier: you just have a Transformer with a sequence classification/token classification head on top. Here you have a very complex way of computing mAP, region proposals, a non-maximum suppression procedure, anchor generation... Luckily, the introduction of DETR by Facebook AI (which replaces a lot of these handcrafted components with a Transformer that learns everything end-to-end) seems really refreshing :)
too late now
He is a great lecturer!
Thank you very much for sharing these useful resources.
Why do the authors of the R-CNN paper use a log-scale transform to get the new scale factors for width?
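My understanding, hedged: predicting log(w / w_a) makes the target scale-invariant and unbounded in both directions (shrinking and growing are symmetric around 0), while exponentiating at decode time guarantees the predicted width stays positive; a raw ratio or offset would have neither property. A sketch of the parameterization, assuming (cx, cy, w, h) boxes (helper names are mine):

```python
import math

def encode(box, anchor):
    """R-CNN-style regression targets (tx, ty, tw, th) for a box vs. an
    anchor/proposal, both given as (cx, cy, w, h). Center offsets are
    normalized by anchor size; scales are taken in log space."""
    bx, by, bw, bh = box
    ax, ay, aw, ah = anchor
    return ((bx - ax) / aw, (by - ay) / ah,
            math.log(bw / aw), math.log(bh / ah))

def decode(t, anchor):
    """Invert encode(): exp() keeps the decoded width/height positive."""
    tx, ty, tw, th = t
    ax, ay, aw, ah = anchor
    return (ax + tx * aw, ay + ty * ah,
            aw * math.exp(tw), ah * math.exp(th))

anchor = (50.0, 50.0, 100.0, 200.0)
box = (60.0, 40.0, 150.0, 100.0)
t = encode(box, anchor)
print(decode(t, anchor))  # recovers the original box (up to float error)
```

Note that a regressor outputting tw near 0 means "keep the anchor's width", and equal-magnitude positive/negative outputs scale by reciprocal factors, which makes the L2 loss on these targets behave sensibly.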
Thank you for making this available, amazing lecture
Great lecture. Thanks a lot.
Surprisingly, there's no mention of YOLO, which makes the R-CNN family obsolete
Yeah!
Seems like the teacher doesn't like YOLO. In the 2022 Winter lectures, not even a word about YOLO was mentioned.
Yes, I'm curious about it too. There's only a flash of the YOLO paper reference at 1:03:57.
best lecture.. I like it.. thank you
Thank you Justin!!
Amazing lecture
I can't download the slides , is there any other way to get it ?
The resolution of these slides is quite high, so their size often exceeds 100 MB. Maybe the network is the main issue.
1:04:13 where is yolo :)
Does anyone have link to the 2020 version?
drive.google.com/drive/folders/1LXriM9h8WNJGErlYQXIrNNytAzVaHBjF?usp=sharing
thank you very much
Is Johnson the guy from Stanford University?
yessss
I wonder if mean average precision could be calculated faster, while still incorporating the performance of the bounding boxes, by simply weighting the detections by their IoUs and using those results instead of rerunning at many different thresholds and averaging.
For example, perfect mean average precision would require the first detections to all correctly identify the detectable objects in the image, with every detection having an IoU of 1.0. Essentially, rather than calculating the area under a curve on a 2D plot of precision and recall and replotting many times at various thresholds, we would instead calculate a 3D volume, where a 2D plot of detections is matched against a third dimension that represents the IoU (or some weighted IoU, if that's better).
It seems to me that would achieve the same results more quickly and elegantly; if anyone knows more, though, I would love to hear about it!
37:10