Computer Vision Talks
Computationally Budgeted Continual Learning | Ameya Prabhu, PhD @ Torr Vision Group, University of Oxford
Paper Abstract:
This talk offers a fresh perspective on continual learning, presenting an innovative approach where computational constraints take precedence over memory limitations. By attending this event, you'll gain insights into a unique setup that challenges conventional paradigms and opens doors to cost-effective knowledge accumulation.
Agenda:
- Why Get Excited: Discover the significance of this unconventional approach and its potential impact on the world of continual learning.
- Progress Report: Explore the latest advancements made in the realm of computationally budgeted continual learning.
- Lessons Learned: Ameya Prabhu will share personal insights from his journey, shedding light on discoveries, obstacles overcome, and breakthroughs achieved.
- Missing Pieces: Dive into the unexplored aspects that still need attention to complete the puzzle in this evolving field.
Key Takeaways:
- 🎯 Optimized Knowledge Accumulation: Learn how to accumulate knowledge efficiently while being mindful of computational costs.
- 🛠️ Effective Strategies: Discover techniques that excel in this unique setup and understand those that might fall short.
- 📊 Real-World Challenges: Gain awareness of challenges posed by real-world data streams, such as near-duplicates and correlated samples, and their impact on algorithms.
- 🔄 Adaptive Deep Models: Explore the journey towards creating continual data structures that augment distributionally robust deep models.
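The takeaways above hinge on trading memory for compute. As a minimal sketch of the idea (illustrative only, with hypothetical names — not the paper's actual method): each incoming chunk of the stream gets a fixed number of gradient steps, no matter how large the replay pool grows.

```python
import numpy as np

def budgeted_sgd_stream(stream, w, budget_per_chunk, lr=0.1, batch_size=32):
    """Compute-budgeted continual learning sketch: unlimited replay memory,
    but a hard cap on gradient steps per incoming chunk."""
    memory_X, memory_y = [], []
    for X, y in stream:
        memory_X.append(X)
        memory_y.append(y)
        pool_X = np.vstack(memory_X)   # memory is allowed to grow freely
        pool_y = np.hstack(memory_y)
        for _ in range(budget_per_chunk):  # the compute budget is what is fixed
            idx = np.random.randint(len(pool_y), size=batch_size)
            xb, yb = pool_X[idx], pool_y[idx]
            grad = xb.T @ (xb @ w - yb) / batch_size  # least-squares gradient
            w = w - lr * grad
    return w
```

On a toy linear-regression stream this recovers the true weights while touching each chunk for only a fixed number of updates.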
About the Speaker:
Ameya Prabhu is a PhD Candidate at the Torr Vision Group, University of Oxford. He has dedicated his academic journey to optimizing neural networks for efficiency and has been an active contributor to the field of continual learning since 2020.
References from the video:
Kilian Weinberger talk : ua-cam.com/video/kY2NHSKBi10/v-deo.html
RDumb paper : arxiv.org/abs/2306.05401
Views: 291

Videos

Learning Object Recognition with Rich Language Descriptions | Liunian Li, PhD@UCLA
172 views · 9 months ago
📜 Abstract: Language-based visual recognition models have come a long way, but there's still much to explore! In this talk, Liunian Harold Li will present groundbreaking research on leveraging rich and comprehensive language queries, including attributes, shapes, textures, and relations, to enhance object recognition models. Prepare to be amazed as he shares how DesCo improves zero-shot detecti...
Memory-Economic Continual Test-Time Adaptation | Junyuan Hong, PhD @ Michigan State University, Intern @ Sony AI
259 views · 1 year ago
Paper Abstract: New Problem: We initiate the study on the memory efficiency of continual test-time adaptation (CTA), revealing the substantial obstacle in practice. New Method: We propose a novel method with a simple plug-in MECTA Norm layer that improves the memory efficiency of different CTA methods. - The norm layer also enables us to stop and restart model adaptation without unused or absen...
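As a rough illustration of the general idea of adapting normalization statistics at test time (a generic sketch, not the paper's MECTA Norm layer; all names here are hypothetical):

```python
import numpy as np

def tta_norm(x, running_mean, running_var, momentum=0.1, eps=1e-5):
    """Generic test-time normalization adaptation: blend the source model's
    running statistics with the current test batch's statistics, then
    normalize. momentum=0 keeps the source stats; momentum=1 fully trusts
    the test batch."""
    mean = (1 - momentum) * running_mean + momentum * x.mean(axis=0)
    var = (1 - momentum) * running_var + momentum * x.var(axis=0)
    return (x - mean) / np.sqrt(var + eps), mean, var
```

Returning the blended statistics lets the caller decide whether to persist them (continual adaptation) or discard them, which is the kind of stop/restart control the talk discusses.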
On the Impact of Estimating Example Difficulty with Chirag Agarwal, Research Scientist@Adobe
184 views · 1 year ago
In this talk we discuss the paper 'Estimating Example Difficulty using Variance of Gradients' accepted at CVPR 2022. Speaker Bio: Chirag is a Research Scientist at the Adobe Media and Data Science Research Lab and a research affiliate at Harvard University. His research interests include developing trustworthy machine learning that goes beyond training models for specific downstream tasks and ensuri...
SVL-Adapter: Self-Supervised Adapter for Vision-Language Pretrained Models
159 views · 1 year ago
SVL-Adapter: Self-Supervised Adapter for Vision-Language Pretrained Models accepted at BMVC 2022 Abstract Vision-language models such as CLIP are pretrained on large volumes of internet image and text pairs, and have been shown to sometimes exhibit impressive zero- and low-shot image classification performance. However, due to their size, fine-tuning these models on new datasets can be prohibit...
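A common remedy this adapter line of work builds on is training only a small module on top of frozen features. A hedged sketch (illustrative only; `W` and `alpha` are hypothetical names, not SVL-Adapter's actual parameterization):

```python
import numpy as np

def adapter_forward(feats, W, alpha=0.5):
    """Residual adapter on frozen backbone features: only the small matrix W
    is trained; the (e.g. CLIP) backbone stays fixed. The adapted features
    are blended residually with the originals, then L2-normalized as is
    typical for vision-language embeddings."""
    adapted = np.maximum(feats @ W, 0.0)             # tiny trainable ReLU layer
    blended = alpha * adapted + (1 - alpha) * feats  # residual mix
    return blended / np.linalg.norm(blended, axis=1, keepdims=True)
```

The point of the residual blend is that with `alpha=0` you recover the zero-shot model exactly, so the adapter can only help where it has evidence to.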
Master of All : Simultaneous Generalization of Urban-Scene Segmentation | ECCV 2022
169 views · 1 year ago
Paper Abstract: Computer vision systems for autonomous navigation must generalize well in adverse weather and illumination conditions expected in the real world. However, semantic segmentation of images captured in such conditions remains a challenging task for current state-of-the-art (SOTA) methods trained on broad daylight images, due to the associated distribution shift. To remedy this, we ...
Contrastive Test-Time Adaptation @CVPR22 | Dian Chen @ToyotaResearchInstitute
820 views · 1 year ago
Paper : arxiv.org/abs/2204.10377 Dian is a researcher from the Machine Learning team at Toyota Research Institute, and previously a researcher at Prof. Trevor Darrell's lab at University of California, Berkeley. She received her master's degree in Robotics in 2019 at University of Pennsylvania, advised by Prof. Kostas Daniilidis. Dian is actively working on 3D perception and domain adaptation. ...
Spatio-temporal Relation Modeling for Few-shot Action Recognition | CVPR 2022
403 views · 1 year ago
Paper Abstract: Novel Few-shot action recognition framework, STRM, is proposed for learning higher-order temporal representations. Aggregate spatial and temporal contexts with dedicated local patch-level and global frame-level feature enrichment sub-modules. Propose a query-class similarity classifier on the patch-level enriched features to enhance class-specific feature discriminability by rei...
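For context, the simplest query-class similarity classifier in few-shot learning compares a query feature against per-class prototypes; STRM's contribution is to enrich features at the patch and frame level before such a comparison. An illustrative baseline sketch (not STRM itself):

```python
import numpy as np

def prototype_classify(query, support, support_labels, num_classes):
    """Nearest class prototype with cosine similarity: average the support
    features per class, then assign the query to the most similar prototype."""
    protos = np.stack([support[support_labels == c].mean(axis=0)
                       for c in range(num_classes)])
    protos = protos / np.linalg.norm(protos, axis=1, keepdims=True)
    q = query / np.linalg.norm(query)
    return int(np.argmax(protos @ q))
```

Everything STRM adds (patch-level and frame-level enrichment, temporal relation modeling) happens upstream of a comparison like this one.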
Margin-based Label Smoothing for Network Calibration, CVPR 2022 | Bingyuan Liu
262 views · 1 year ago
Abstract In spite of the dominant performances of deep neural networks, recent works have shown that they are poorly calibrated, resulting in over-confident predictions. Miscalibration can be exacerbated by overfitting due to the minimization of the cross-entropy during training, as it promotes the predicted softmax probabilities to match the one-hot label assignments. This yields a pre-softmax...
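For reference, the standard (uniform) label smoothing that this line of work analyzes and generalizes replaces one-hot targets with softened ones, so the cross-entropy stops pushing the winning logit toward infinity. A small sketch:

```python
import numpy as np

def smooth_targets(y, num_classes, eps=0.1):
    """Uniform label smoothing: mix the one-hot target with the uniform
    distribution. The true class gets (1 - eps) + eps/K, every other
    class gets eps/K, so targets are never exactly 0 or 1."""
    onehot = np.eye(num_classes)[y]
    return (1.0 - eps) * onehot + eps / num_classes
```

The margin-based variant in the paper replaces this fixed uniform mixing with a constraint on logit margins; the sketch above is only the baseline it improves on.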
Role of Shannon Entropy as a Regularizer of Deep NNs with Prof. Jose Dolz @ ÉTS Montreal
210 views · 1 year ago
Abstract: With the advent of deep learning models a variety of additional terms have been integrated into the main learning objective, which typically serve as a regularizer of the model predictions. This is the case, for example, of the Shannon entropy, which has been widely used in semi-supervised learning to penalize high-entropy predictions, and therefore encourage confident predictions on ...
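The Shannon entropy regularizer mentioned here can be computed directly from the softmax outputs; adding a weighted mean-entropy term to the loss on unlabeled data penalizes high-entropy outputs and encourages confident predictions. A minimal sketch:

```python
import numpy as np

def entropy_penalty(logits):
    """Mean Shannon entropy of the softmax predictions. Uniform predictions
    score log(K); confident (near one-hot) predictions score near 0, so
    minimizing this term pushes the model toward confident outputs."""
    z = logits - logits.max(axis=1, keepdims=True)          # stable softmax
    p = np.exp(z) / np.exp(z).sum(axis=1, keepdims=True)
    return float(-np.mean(np.sum(p * np.log(p + 1e-12), axis=1)))
```

In semi-supervised training this would typically enter the objective as `loss = supervised_loss + lam * entropy_penalty(unlabeled_logits)` for some weight `lam` (a hypothetical name here).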
Self-Supervising Occlusions For Vision | Dinesh Reddy, PhD @ CMU Robotics
226 views · 2 years ago
Abstract : Virtually every scene has occlusions. Understanding and dealing with occlusions is hard due to the large variation in the type, number, and extent of occlusions possible in scenes. In this talk, we propose developing computer vision algorithms robust to occlusions using self-supervision. We propose two methodologies for learning such occlusions for data captured in the wild. The firs...
Mathematical Models of Brain Connectivity and Behavior | Niharika S. D’Souza @IBM Research, Almaden
175 views · 2 years ago
Abstract: The study of networks is very relevant to modern day data-science, as we gain a lot of insight into otherwise mysterious phenomena. One such complex network is the human brain. Recently, there has been a lot of interest in understanding how regions in the brain communicate with each other and how these communication patterns influence our behavior and health. This sets us up for an im...
Mixture-Based Feature Space Learning for Few-Shot Classification | ICCV 2021| Arman Afrasiyabi @MILA
554 views · 2 years ago
You can read the paper here - lnkd.in/ddHKQ4v5 Check out the code - github.com/ArmanAfrasiyabi/MixtFSL-fs Abstract: We introduce Mixture-based Feature Space Learning (MixtFSL) for obtaining a rich and robust feature representation in the context of few-shot image classification. Previous works have proposed to model each base class either with a single point or with a mixture model by relying o...
Generalized and Incremental Few-Shot Learning by Explicit Learning & Calibration without Forgetting
425 views · 2 years ago
Link to the Paper - arxiv.org/pdf/2108.08165.pdf 1) What is few-shot learning? 2) What is generalized few-shot learning? 3) What are the difficulties? 4) Our framework to address these difficulties? 5) Extension to incremental learning? Paper abstract: Both generalized and incremental few-shot learning have to deal with three major challenges: learning novel classes from only a few samples per ...
Discriminative Region-based Multi-Label Zero-Shot Learning [ICCV 2021] Akshita Gupta @IIAI
376 views · 2 years ago
Paper Abstract: Multi-label zero-shot learning (ZSL) is a more realistic counter-part of standard single-label ZSL since several objects can co-exist in a natural image. However, the occurrence of multiple objects complicates the reasoning and requires region-specific processing of visual features to preserve their contextual cues. We note that the best existing multi-label ZSL method takes a s...
PAWS : Semi-Supervised Learning of Visual Features
881 views · 2 years ago
Exploring Explainable AI : Differential Diagnosis of Benign Breast Lesions
235 views · 2 years ago
Using Progressive Context Encoders for Anomaly Detection | AI in Healthcare | Generative Modeling
491 views · 2 years ago
What Can We Learn From Subtitled Sign Language Data? Gül Varol, Asst. Prof@École des Ponts ParisTech
540 views · 2 years ago
ViTGAN : Training GANs with Vision Transformers | Paper Discussion with the Author
1.7K views · 2 years ago
Federated Learning in Vision Tasks | Umberto Michieli, PhD@Uni of Padova, Intern@Samsung Research
676 views · 2 years ago
PLOP : Learning continuously without forgetting for Continual SemSeg | CVPR2021 | Arthur Douillard
556 views · 2 years ago
An Identifiability Perspective on Representation Learning | Yash Sharma, PhD@(MPI-​IS)
364 views · 2 years ago
Continual Prototype Evolution: Learning Online from Non-Stationary Data Streams | Matthias De Lange
524 views · 2 years ago
SeqNet: Learning Descriptors for Hierarchical Place Recognition | Sourav Garg, PostDoc @QUT
267 views · 3 years ago
Scale Equivariant Siamese Tracking | WACV 2021 | Ivan Sosnovik & Artem Moskalev
433 views · 3 years ago
Conformal Inference of Counterfactuals and Individual Treatment effects(Stanford)| Lihua Lei
1.2K views · 3 years ago
Self-Supervised Few-Shot Learning on Point Clouds(NeurIPS 2020) | Charu Sharma
578 views · 3 years ago
Unsupervised Domain Adaptation for Semantic Segmentation of NIR Images | Spotlight@ECCV2020
607 views · 3 years ago
GOCor (NeurIPS 2020) | Prune Truong & Martin Danelljan
421 views · 3 years ago

COMMENTS

  • @paedrufernando2351
    @paedrufernando2351 25 days ago

    the brownies keep disturbing a lot

  • @entertain7
    @entertain7 4 months ago

    I was working on the same project, can you assist me?

    • @AamirMaqsood-sn7gq
      @AamirMaqsood-sn7gq 8 days ago

      Hi, I'm working on the same project. Can you reply to my message?

  • @rrverma6391
    @rrverma6391 4 months ago

    thanks

  • @user-pl6ur3mn7z
    @user-pl6ur3mn7z 11 months ago

    please share the slides

  • @kushalkushal8458
    @kushalkushal8458 1 year ago

    Please provide the github code

  • @kunaldargan4467
    @kunaldargan4467 1 year ago

    Excellent presentation

  • @juliussilaa8998
    @juliussilaa8998 1 year ago

    Nobody ever described Representation Learning with such simplicity and clarity.

  • @harunmwangi8135
    @harunmwangi8135 1 year ago

    Nice work 💯

  • @akhildev312
    @akhildev312 1 year ago

    Can you please share the slides asap? The link is broken

  • @vinayaka.b1494
    @vinayaka.b1494 1 year ago

    great lecture

  • @estherlisiane9786
    @estherlisiane9786 1 year ago

    Great work! Do you think this can be adapted for longer activities that combine these actions — say, pick up the cup, then pour water, then drink, then pass it — all as one activity ("drinking water") among others? For instance, from the input pose of touching the cup, could you predict "drinking water"?

  • @akashvermaietlucknowstuden4721

    Superb Presentation.

  • @kevaldholu7366
    @kevaldholu7366 1 year ago

    All the examples are from indoor datasets. How about autonomous driving datasets like KITTI, nuScenes, or A2D2? Is it possible to create a map of an outdoor environment (say, using a high-resolution camera)?

  • @RyanMartinRAM
    @RyanMartinRAM 2 years ago

    Hi, thanks for this video and a quick question. What is the format of the ImageNet support labels? Are they labeled as polygonal objects or simply given a name via the folder hierarchy?

  • @siyaoli6163
    @siyaoli6163 2 years ago

    Amazing algorithms!

  • @abhaagarwal6922
    @abhaagarwal6922 2 years ago

    Congrats Akshita

  • @amitgupta-zu9cs
    @amitgupta-zu9cs 2 years ago

    Excellent work done in the field of point cloud. Keep it up.

  • @baseldbwan
    @baseldbwan 2 years ago

    Very good paper. How can I get the continual dataset? (baseldbwan at Yahoo email)

  • @HemakumarK
    @HemakumarK 2 years ago

    Hi, you mentioned in your video that you would share the GitHub link, but I couldn't find it. Please share it.

    • @TheHVu-mo4hm
      @TheHVu-mo4hm 1 year ago

      github.com/kayoyin

    • @kaizen_4603
      @kaizen_4603 6 months ago

      Yes, I would like to have it too. Pretty please

  • @rishabhsahlot7481
    @rishabhsahlot7481 2 years ago

    How are you taking two inputs (encoder-based features and a mesh grid of patch coordinates) for the Fourier feature networks? I mean, how are you combining them?

    • @emiliomorales2843
      @emiliomorales2843 2 years ago

      The last page of the paper says: "The 2-layer MLP takes positional embedding E_fou as its input, and it is conditioned on patch embedding y_i via weight modulation as in [27][1]". But I still don't understand how the modulation is done; the CIPS code is quite different. Another option could be to just concatenate or sum the mesh grid coordinates and the transformer features.

    • @rishabhsahlot7481
      @rishabhsahlot7481 2 years ago

      @@emiliomorales2843 Yeah, I got it: it uses the weight demodulation of StyleGAN2 ([27] points to StyleGAN2). That seems the more credible reading.

    • @emiliomorales2843
      @emiliomorales2843 2 years ago

      @Rishabh Sahlot Fourier embeddings modulated by the transformer features, or vice versa? But isn't that the hardest part to implement? It seems to work OK in all layers except the transformer feed-forward block.

    • @rishabhsahlot7481
      @rishabhsahlot7481 2 years ago

      @@emiliomorales2843 The former, I think.

    • @computervisiontalks4659
      @computervisiontalks4659 2 years ago

      @@rishabhsahlot7481 Please reach out to the authors on mail/linkedin, they are usually very responsive. :)

  • @shambhoolalpurohit2588
    @shambhoolalpurohit2588 2 years ago

    Nice, Sidharth. Keep it up!

  • @prabhjeetsinghcheema9218
    @prabhjeetsinghcheema9218 2 years ago

    Thanks for making this video. This helped me a lot. Keep posting more research-based discussions.

    • @computervisiontalks4659
      @computervisiontalks4659 2 years ago

      Join us this Saturday, 21st August at 7PM IST to ask all about grad school applications!

  • @user-rt7tu7dk4q
    @user-rt7tu7dk4q 2 years ago

    Good talks, thanks

  • @Janamejaya.Channegowda
    @Janamejaya.Channegowda 2 years ago

    Thank you for sharing the presentation, very useful, I was looking for resources related to continual learning, keep up the great work.

  • @isaackay5887
    @isaackay5887 3 years ago

    How would someone like me be able to get involved with this project? I've been working on various smaller projects to build up to this, but y'all have a very good framework already. I've taken a few ASL courses at university and have continued to sign since. I would love to help contribute my knowledge and time to this

  • @arjunashok4956
    @arjunashok4956 3 years ago

    The name of the speaker is wrong in the description. Please correct it. Thanks for this great talk! I hope to see more ML-based talks!

  • @divyanshgarg9584
    @divyanshgarg9584 3 years ago

    my name is also divyansh garg same bro

  • @dukebuaa443
    @dukebuaa443 3 years ago

    Would you please share the ppt used in the talk?

    • @shambhavimishra2802
      @shambhavimishra2802 3 years ago

      Sure, find it in our GitHub Repository.

    • @fabian0605
      @fabian0605 2 years ago

      @@shambhavimishra2802 I can't find it there unfortunately. Could you provide a link?

    • @linwang3917
      @linwang3917 2 years ago

      @@fabian0605 The code link usually resides inside the paper, typically in the last sentence of the abstract.

  • @ousheshharadhun3773
    @ousheshharadhun3773 3 years ago

    Great job, Krishna. Do you know the time complexity of GradSLAM?

  • @anusuiyatiwari1800
    @anusuiyatiwari1800 3 years ago

    Can I run this code in Google Colab?

  • @TrungNguyen-ty3qi
    @TrungNguyen-ty3qi 3 years ago

    Thanks for the great talk! Could you publish the slides?

  • @ahmed-nm2bl
    @ahmed-nm2bl 3 years ago

    Hello, could I please get the presentation file? Thank you in advance.

  • @shivenkhajuria2488
    @shivenkhajuria2488 3 years ago

    Is ART the most efficient method to handle Weakly Supervised Object Localization problem?

  • @AkshatSurolia
    @AkshatSurolia 3 years ago

    Great work, really helpful!

  • @miriamsaucedo7137
    @miriamsaucedo7137 3 years ago

    Does anybody know if this approach could be applied to a multi-camera system?

    • @computervisiontalks4659
      @computervisiontalks4659 3 years ago

      Please reach out to the speaker on Twitter (@JonathonLuiten) and ask this question.

  • @jyothiswaroopsantu
    @jyothiswaroopsantu 3 years ago

    Excellent Talk by Krishnamurthy Sir.

  • @sahanaprabhu5013
    @sahanaprabhu5013 3 years ago

    Informative talk

  • @superaluis
    @superaluis 3 years ago

    Very interesting research and extremely clear presentation! The experiments were very insightful. Awesome work, Pramod!