Really good explanation. Would love to see you make more videos. You're very clear and the visual content you present is easily digestible
Thank you! Started at a start-up and it has eaten my time lol
@aiape6954 Start-up grind ain't no joke fr
Awesome explanation!
Thanks a lot for this video!
How is it different from DINO itself? I wish there were more explanation.
Really easy to understand! Thanks!
Amazing work! I really want to know how to decide on the cropping parameters for different datasets. Is it completely based on experience?
The research doesn't explain any strategy for tuning these parameters, so you have to assume it's some mixture of intuition and trial and error. I would be interested in applying an evolutionary algorithm to find the best parameter set and seeing whether it can push DINO performance further.
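A rough sketch of what that search could look like, as a tiny evolutionary loop over the crop-scale parameters; note that `train_and_eval` is just a placeholder for an actual short DINO training run, not something from the paper:

```python
import random

def train_and_eval(global_scale, local_scale, n_local_crops):
    """Placeholder for a short DINO training run with these crop settings.
    A real version would return a validation metric such as k-NN accuracy;
    here a random score stands in so the loop runs end to end."""
    return random.random()

def mutate(params):
    # Perturb each crop parameter slightly, keeping the scales in a sane range.
    g, l, n = params
    g = min(1.0, max(0.3, g + random.uniform(-0.1, 0.1)))
    l = min(g, max(0.05, l + random.uniform(-0.05, 0.05)))
    n = max(2, n + random.choice([-2, 0, 2]))
    return (g, l, n)

# Tiny (mu + lambda)-style search over (global scale, local scale, #local crops).
population = [(0.4, 0.25, 8), (0.5, 0.2, 6), (0.6, 0.3, 10)]
for generation in range(5):
    scored = sorted(((train_and_eval(*p), p) for p in population), reverse=True)
    parents = [p for _, p in scored[:2]]  # keep the two best candidates
    population = parents + [mutate(random.choice(parents)) for _ in range(4)]
print(parents[0])
```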
Thank you so much. It was clear and interesting. I have a question, please: is it possible to modify the attention maps in this model?
Check out this repo! I use it all the time.
github.com/ShirAmir/dino-vit-features/tree/main
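If you just want to inspect the attention maps rather than use that repo, here's a rough sketch using the Hugging Face DINOv2 checkpoint (the model name and the CLS-attention slicing are illustrative, not taken from that repo); actually modifying the maps would mean hooking into the model's forward pass:

```python
import torch
from PIL import Image
from transformers import AutoImageProcessor, AutoModel

# Load a pretrained DINOv2 backbone from Hugging Face.
processor = AutoImageProcessor.from_pretrained("facebook/dinov2-base")
model = AutoModel.from_pretrained("facebook/dinov2-base")

image = Image.new("RGB", (224, 224))  # stand-in for a real image
inputs = processor(images=image, return_tensors="pt")

with torch.no_grad():
    outputs = model(**inputs, output_attentions=True)

# outputs.attentions is a tuple with one tensor per layer,
# each of shape (batch, num_heads, num_tokens, num_tokens).
last_layer_attn = outputs.attentions[-1]
cls_to_patches = last_layer_attn[0, :, 0, 1:]  # CLS-token attention per head
print(cls_to_patches.shape)
```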
Well explained!!
That's a great explanation. Are you planning to make a video on the Florence-2 model? I would love to see it for a livestock use case.
Hi, great video. Had a tangent question: I am trying to use the base pretrained DINOv2 model from Hugging Face on the Broad Institute's BBBC021 dataset of MCF7 breast cancer cells, and I'm finding that the CLS embeddings, when clustered, don't align with the labels (MoAs) in the dataset. Given your experience with DINO, do you think this is due to the cropping strategy used in the pretrained model, and that I would have to retrain a bare-bones DINOv2 model on millions of microscopy images to get classification working correctly?
Thanks for any help!
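For reference, a minimal sketch of the pipeline I'm describing; the actual BBBC021 image loading is omitted, and the model name and clustering setup are just placeholders:

```python
import numpy as np
import torch
from PIL import Image
from sklearn.cluster import KMeans
from transformers import AutoImageProcessor, AutoModel

processor = AutoImageProcessor.from_pretrained("facebook/dinov2-base")
model = AutoModel.from_pretrained("facebook/dinov2-base").eval()

def cls_embedding(image: Image.Image) -> np.ndarray:
    """Return the CLS-token embedding for a single image."""
    inputs = processor(images=image, return_tensors="pt")
    with torch.no_grad():
        out = model(**inputs)
    return out.last_hidden_state[0, 0].numpy()  # CLS token

# images: a list of PIL images (loading the BBBC021/MCF7 crops is omitted here).
images = [Image.new("RGB", (224, 224)) for _ in range(8)]
embeddings = np.stack([cls_embedding(img) for img in images])

# Cluster the embeddings and compare the assignments against the MoA labels.
clusters = KMeans(n_clusters=4, n_init=10).fit_predict(embeddings)
print(clusters)
```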
Thank you for wrapping up the code and explanation. Does your code support a multi-node implementation? And is there any difference between your notebook and the DINOv2 code?
Thank you!
Hello, I have a paid project on DINO, iBOT, and DINOv2. Will you help?
I wonder if you think DINOv2 could be applied to CNNs?
My intuition is that it would work, but not as well as with transformers. Transformers are slow and computationally expensive, but they hold information in a way that CNNs cannot. You're probably better off distilling from a transformer down to a CNN.
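Roughly, I mean a feature-distillation setup like this sketch, where a small CNN regresses the frozen DINOv2 teacher's CLS embedding; the ResNet-18 student, cosine loss, and hyperparameters are placeholders, not anything from the DINOv2 paper:

```python
import torch
import torch.nn as nn
import torchvision
from transformers import AutoModel

# Frozen DINOv2 teacher (CLS embeddings) and a small CNN student.
teacher = AutoModel.from_pretrained("facebook/dinov2-base").eval()
student = torchvision.models.resnet18(weights=None)
student.fc = nn.Linear(student.fc.in_features, 768)  # match teacher embedding dim

optimizer = torch.optim.AdamW(student.parameters(), lr=1e-4)
loss_fn = nn.CosineEmbeddingLoss()

def distill_step(pixel_values: torch.Tensor) -> float:
    """One distillation step: the student regresses the teacher's CLS embedding."""
    with torch.no_grad():
        target = teacher(pixel_values=pixel_values).last_hidden_state[:, 0]
    pred = student(pixel_values)
    loss = loss_fn(pred, target, torch.ones(pixel_values.size(0)))
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

# Dummy batch in place of a real dataloader (normalized 224x224 RGB crops).
batch = torch.randn(4, 3, 224, 224)
print(distill_step(batch))
```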
Amazing explanation, but I think you are just explaining DINO instead of DINOv2.
Everything in this video applies to both. The process was optimized for DINOv2 but the structure remained the same.