Thank you guys for all the hard work you do! And making available for free to all of us!
7:10
The IoU is not just the amount of overlap between the two boxes; it's "Intersection over Union", i.e. the area of overlap divided by the area of union. It's a proportion, whereas the intersection alone is just the overlap value.
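That definition is only a few lines of code. A minimal sketch, assuming axis-aligned boxes in (x1, y1, x2, y2) corner format (real detection libraries differ in box conventions and edge handling):

```python
def iou(box_a, box_b):
    """Intersection over Union of two axis-aligned boxes (x1, y1, x2, y2)."""
    # Corners of the intersection rectangle
    ix1 = max(box_a[0], box_b[0])
    iy1 = max(box_a[1], box_b[1])
    ix2 = min(box_a[2], box_b[2])
    iy2 = min(box_a[3], box_b[3])
    # Clamp to zero when the boxes do not overlap at all
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0
```

Note how identical boxes give 1.0 and disjoint boxes give 0.0, which is exactly the "proportion" point above: the intersection alone would only tell you the raw overlap area.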
Great content!
1. The F1 score is the harmonic mean of precision and recall, not simply their product.
2. 9:20 you didn't really clarify things. So what is mAP:
a. the average of AP at different IoUs for a single class, or
b. the average of AP across different classes? But then what happens to the AP at different IoUs?
Overall it is informative, but it would be better if you could clarify things a bit more.
I found the same thing on their blog post. Doesn't actually answer the title of the video.
AP is calculated using a single IoU, as the mean of precisions achieved at each recall level (different detection thresholds).
Since AP is calculated per class, mAP (mean average precision) is the mean of the per-class APs.
AP and mAP depend on the selected IoU, and are thus named after that IoU (mAP50, mAP75, etc.)
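Putting those two steps together in code, a minimal sketch (using the classic 11-point interpolated AP from PASCAL VOC; COCO instead integrates over 101 recall points and additionally averages over IoU thresholds 0.50:0.95):

```python
import numpy as np

def average_precision(recalls, precisions):
    """11-point interpolated AP at a fixed IoU: for each recall level
    r in {0, 0.1, ..., 1.0}, take the max precision achieved at
    recall >= r, then average those 11 precisions."""
    ap = 0.0
    for r in np.linspace(0.0, 1.0, 11):
        mask = recalls >= r
        p = precisions[mask].max() if mask.any() else 0.0
        ap += p / 11.0
    return ap

def mean_average_precision(per_class_pr):
    """mAP at a fixed IoU: mean of the per-class APs.
    per_class_pr: list of (recalls, precisions) arrays, one per class."""
    return float(np.mean([average_precision(r, p) for r, p in per_class_pr]))
```

So mAP50 is this mean computed with matches decided at IoU 0.5, mAP75 at IoU 0.75, and so on.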
@@alejandromarceloproiettian5079 You mention different detection thresholds. Is this the confidence value that the model outputs?
@@ankitmagan The confidence value (confidence score) is the probability that an object is present in a particular anchor box. It mostly comes from the classifier.
Here we are talking about IoU. It's the overlap/union ratio between the predicted bounding box and the ground-truth (actual) bounding box in our labelled dataset. We can calculate mAP when we have a labelled test dataset: we predict boxes and compare how precise the generated bounding boxes are with respect to the ground-truth boxes.
Best video in this topic
What is that plot with confidence on the y-axis at 4:18? It's super confusing.
Thanks all for the instruction!
Nice work. Thanks!
Amazingly explained!
Glad it was helpful!
AP (of a single class) is calculated at a fixed IoU, right? Because a P-R value depends on two factors: confidence and IoU. When computing the P-R curve, only the confidence is varied (the IoU is fixed).
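Right, and that's easy to see in code. A sketch of how the P-R curve is traced by sweeping only the confidence threshold, after each detection has already been marked true/false positive at a fixed IoU threshold (the matching step itself is assumed done upstream):

```python
def pr_curve(detections, num_gt):
    """detections: list of (confidence, is_tp) pairs, where is_tp was
    decided by matching against ground truth at a FIXED IoU threshold.
    num_gt: total number of ground-truth boxes for this class.
    Sorting by confidence and lowering the threshold one detection at a
    time yields one (precision, recall) point per detection."""
    detections = sorted(detections, key=lambda d: d[0], reverse=True)
    tp = fp = 0
    points = []
    for _conf, is_tp in detections:
        if is_tp:
            tp += 1
        else:
            fp += 1
        points.append((tp / (tp + fp), tp / num_gt))
    return points
```

Notice the IoU never appears inside the loop; it only influenced the precomputed `is_tp` flags. Changing the IoU threshold changes those flags and hence gives a different curve (and a different AP).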
Thank you very much
You're welcome 🙏🏻
Can I get the link to the paper that introduced mAP?
Can I get the code to calculate them?
use tensorflow for that
@@legohistory Isn't it calculated directly inside the Google Colab algorithm folders?
@@abbasalsiweedi9019 I do not understand. What do you mean?
You can't build serious applications in Python, because internally it uses floating-point numbers. Python is only good for playing around and learning.