How to Prune YOLOv8 and Any PyTorch Model to Make It Faster

  • Published 12 Sep 2024

COMMENTS • 7

  • @NicolaiAI
    @NicolaiAI  1 month ago +2

    Join My AI Career Program
    www.nicolai-nielsen.com/aicareer
    Enroll in the Investing Course outside the AI career program
    nicolai-nielsen-s-school.teachable.com/p/investment-course
    Camera Calibration Software and High Precision Calibration Boards
    camera-calibrator.com/

  • @anasssofti9271
    @anasssofti9271 5 days ago

    Great topic! I could potentially add some extra insights, since I worked on an ultra-fast-speed project involving complex-valued models where I reimplemented the pruning module. The key point here is that, in theory, pruning should speed up inference (as mentioned in the lottery ticket hypothesis paper, for example). However, it only generates a binary mask based on certain criteria (like weight magnitude), and the pruning process involves an element-wise product between the weights and the binary mask, so the zeros are still present. While the model becomes sparse, it could be interesting for sparse storage by using efficient memory layouts like COO, CSR, BSR, etc. (see torch.sparse). I've forgotten some details, so I'll double-check and provide more in-depth feedback later.
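The mask mechanism described in this comment can be sketched with `torch.nn.utils.prune`. This is a minimal illustration, not code from the video; the layer shape and 50% pruning amount are arbitrary assumptions:

```python
import torch
import torch.nn as nn
import torch.nn.utils.prune as prune

layer = nn.Linear(64, 32)

# Zero out the 50% smallest-magnitude weights (L1 criterion).
prune.l1_unstructured(layer, name="weight", amount=0.5)

# The pruned weight is weight_orig * weight_mask, element-wise,
# exactly as the comment says: a binary mask multiplied into the weights.
assert torch.equal(layer.weight, layer.weight_orig * layer.weight_mask)

# The zeros are still materialized in a dense tensor...
print(layer.weight.is_sparse)               # False
print((layer.weight == 0).float().mean())   # about half the entries are zero

# ...but the sparse weight can be stored in a compressed layout
# (COO shown here; CSR/BSR layouts also exist under torch.sparse).
sparse_w = layer.weight.detach().to_sparse()
print(sparse_w.layout)                      # torch.sparse_coo
```

This is why pruning alone does not make the model faster or smaller on disk: the dense tensor still holds every zero until you convert it to a sparse layout or physically remove structures.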

  • @flychuban9896
    @flychuban9896 4 days ago +1

    Very useful video :}

    • @NicolaiAI
      @NicolaiAI  4 days ago

      Thanks a lot! Glad u liked it

  • @taufiqahmed2738
    @taufiqahmed2738 18 days ago +1

    Hi, thanks for the pruning tutorial. But when I run the code, instead of reducing the size of the model it increased the size from 21 MB to 43 MB. Could you please provide any code for how I can reduce the size?

    • @zareen6856
      @zareen6856 12 days ago

      Facing the same problem. The goal is to reduce the size of the model after successful pruning, but we're getting the opposite result.

    • @anasssofti9271
      @anasssofti9271 5 days ago

      Check the weight and bias buffers, because technically you will have a bit mask over the weights and biases. Review this with model.state_dict().keys() and check whether you have a .weight_mask somewhere, or take a specific layer module where you did the pruning and call .named_buffers() on it.
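The check suggested here also explains the doubled file size reported above: while a prune mask is attached, the state_dict stores both `weight_orig` and `weight_mask`, roughly doubling what gets saved. A minimal sketch (the layer shape and pruning amount are illustrative assumptions):

```python
import torch.nn as nn
import torch.nn.utils.prune as prune

layer = nn.Linear(64, 32)
prune.l1_unstructured(layer, name="weight", amount=0.5)

# Both the original weight and the mask are stored, so the
# saved model is roughly twice as large as before pruning.
print(sorted(layer.state_dict().keys()))
# ['bias', 'weight_mask', 'weight_orig']
print([name for name, _ in layer.named_buffers()])
# ['weight_mask']

# Make the pruning permanent: the mask is folded into 'weight',
# the extra buffer disappears, and the saved file returns to its
# original size (the zeros are still stored densely, though,
# unless you convert the tensor to a sparse layout).
prune.remove(layer, "weight")
print(sorted(layer.state_dict().keys()))
# ['bias', 'weight']
```

Calling `prune.remove` on every pruned module before `torch.save` avoids the size increase; shrinking below the original size additionally requires sparse storage or structured (channel-level) pruning.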