Join My AI Career Program
www.nicolai-nielsen.com/aicareer
Enroll in the Investing Course outside the AI career program
nicolai-nielsen-s-school.teachable.com/p/investment-course
Camera Calibration Software and High Precision Calibration Boards
camera-calibrator.com/
Great topic! I could potentially add some extra insights, since I worked on an ultra-fast inference project involving complex-valued models, where I reimplemented the pruning module. The key point here is that, in theory, pruning should reduce model size and speed up inference (as discussed in the lottery ticket hypothesis paper, for example). In practice, however, it only generates a binary mask based on some criterion (like weight magnitude), and the pruning is applied as an element-wise product between the weights and the binary mask, so the zeros are still stored explicitly. While the model becomes sparse, it could be interesting to exploit that sparsity for storage by using efficient memory layouts like COO, CSR, BSR, etc. (see torch.sparse). I've forgotten some details, so I'll double-check and provide more in-depth feedback later.
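To make the "zeros are still present" point concrete, here is a minimal sketch (the tensor size and 50% sparsity level are made up for illustration, not taken from the video): magnitude pruning via a binary mask leaves a dense tensor full of explicit zeros, and only converting to a sparse layout (COO here) actually stores fewer values.

```python
import torch

torch.manual_seed(0)
w = torch.randn(4, 4)                 # dense weights

# Magnitude criterion: keep entries at or above the median magnitude
threshold = w.abs().flatten().median()
mask = (w.abs() >= threshold).float() # binary mask, same shape as w

# Element-wise product: the result is still a dense 4x4 tensor,
# the pruned entries are stored as explicit zeros
pruned = w * mask
assert pruned.shape == w.shape

# A sparse layout stores only the non-zero entries
sparse = pruned.to_sparse()
print(sparse._nnz(), "non-zeros stored out of", w.numel())
```

The same idea applies with `to_sparse_csr()` for CSR layout; which layout wins depends on the sparsity pattern and the kernels available for it.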
Very useful video :}
Thanks a lot! Glad u liked it
Hi, thanks for the pruning tutorial. But when I run the code, instead of reducing the size of the model it increases it from 21 MB to 43 MB. Could you please provide some code showing how I can reduce the size?
Facing the same problem. The goal is to reduce the size of the model after successful pruning, but I'm getting the opposite result.
Check the weight and bias buffers: technically, after pruning you will have a binary mask stored alongside the original weights (and bias, if pruned), which is why the saved file roughly doubles in size. Review this with model.state_dict().keys() and check whether a weight_mask entry appears somewhere, or take a specific layer module where you applied the pruning and inspect its named_buffers().
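A concrete sketch of that check (assuming the tutorial uses torch.nn.utils.prune; the layer dimensions here are arbitrary): after pruning, the layer holds a full-size weight_orig parameter plus a full-size weight_mask buffer, two tensors instead of one, which explains the doubled file size. Calling prune.remove() folds the mask back into a single dense weight tensor, returning the state_dict to its original size; note it does not shrink below that, since the zeros are still stored densely.

```python
import torch
import torch.nn as nn
import torch.nn.utils.prune as prune

layer = nn.Linear(16, 8)
prune.l1_unstructured(layer, name="weight", amount=0.5)

# After pruning: 'weight' is replaced by 'weight_orig' (a parameter)
# plus 'weight_mask' (a buffer) -- two full-size tensors instead of one.
print([name for name, _ in layer.named_buffers()])      # ['weight_mask']
print("weight_orig" in dict(layer.named_parameters()))  # True

# Make pruning permanent: drops the mask/orig pair and keeps a single
# dense weight tensor (zeros included), so the saved size goes back to
# the original -- actual shrinking requires sparse storage on top.
prune.remove(layer, "weight")
print([name for name, _ in layer.named_buffers()])      # []
assert (layer.weight == 0).sum() >= layer.weight.numel() // 2
```

So the answer for the 21 MB → 43 MB question is: either call prune.remove() on each pruned module before saving, or save the masked weights in a sparse layout.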