Pruning a Neural Network for Faster Training Times
- Published Nov 20, 2022
- Neural networks and neural-network-based architectures are powerful models that can tackle abstract problems, but they are known for taking a long time to train. In this video, we learn a new technique: pruning neural networks.
Previous lesson: • How to select the corr...
Next lesson: • How to Use Learning Ra...
📙 Here is a lesson notes booklet that summarizes everything you learn in this course in diagrams and visualizations. You can get it here 👉 misraturp.gumroad.com/l/fdl
👩💻 You can get access to all the code I develop in this course here: github.com/misraturp/Deep-lea...
❓To get the most out of the course, don't forget to answer the end-of-module questions:
fishy-dessert-4fc.notion.site...
👉 You can find the answers here:
fishy-dessert-4fc.notion.site...
RESOURCES:
🏃♀️ Data Science Kick-starter mini-course: www.misraturp.com/courses/dat...
🐼 Pandas cheat sheet: misraturp.gumroad.com/l/pandascs
📥 Streamlit template (updated in 2023, now for $5): misraturp.gumroad.com/l/stemp
📝 NNs hyperparameters cheat sheet: www.misraturp.com/nn-hyperpar...
📙 Fundamentals of Deep Learning in 25 pages: misraturp.gumroad.com/l/fdl
COURSES:
👩💻 Hands-on Data Science: Complete your first portfolio project: www.misraturp.com/hods
🌎 Website - misraturp.com/
🐥 Twitter - / misraturp - Science & Technology
Pruning the network by itself doesn't actually speed it up.
It can even slow down inference latency
(because most AI frameworks we use don't support sparse matrix computation).
The most important thing is to install a kernel with sparse-computation support on your system.
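The commenter's point can be illustrated with a minimal NumPy sketch (my own illustration, not code from the video): magnitude pruning zeroes out weights, but the matrix is still stored and multiplied densely, so without a sparse format and a sparse-aware kernel there is no memory or FLOP saving.

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.standard_normal((256, 256)).astype(np.float32)  # a layer's weight matrix

# Magnitude pruning: zero out the 90% of weights with the smallest absolute value.
threshold = np.quantile(np.abs(W), 0.90)
mask = np.abs(W) >= threshold
W_pruned = W * mask

sparsity = 1.0 - mask.mean()
print(f"sparsity: {sparsity:.2f}")

# The pruned matrix is still stored densely: same shape, same bytes.
print(W.nbytes == W_pruned.nbytes)

# A dense matmul with W_pruned costs the same FLOPs as with W;
# only a sparse representation plus a sparse kernel can exploit the zeros.
x = rng.standard_normal(256).astype(np.float32)
y = W_pruned @ x
```

Libraries such as `scipy.sparse` (CSR matrices) or hardware-specific sparse kernels are what actually convert those zeros into speedups.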
Can pruning be done after completing the training?
For example, I have a pre-trained model.
Using PTQ (post-training quantization), we quantize the model; similarly, can we do post-training pruning?
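Yes, one-shot magnitude pruning can be applied to an already-trained model, analogous to PTQ. A hedged NumPy sketch of global post-training magnitude pruning (the `post_training_prune` helper and layer names are hypothetical, introduced here for illustration):

```python
import numpy as np

def post_training_prune(weights: dict, amount: float) -> dict:
    """One-shot global magnitude pruning of a pretrained model's weights.

    `weights` maps layer names to arrays; the `amount` fraction of weights
    with the smallest absolute value across ALL layers is set to zero.
    (Hypothetical helper for illustration, not from the video.)
    """
    all_vals = np.concatenate([np.abs(w).ravel() for w in weights.values()])
    threshold = np.quantile(all_vals, amount)
    return {name: w * (np.abs(w) >= threshold) for name, w in weights.items()}

rng = np.random.default_rng(1)
# Stand-in for pretrained weights loaded from a checkpoint.
pretrained = {
    "dense1": rng.standard_normal((64, 128)).astype(np.float32),
    "dense2": rng.standard_normal((128, 10)).astype(np.float32),
}
pruned = post_training_prune(pretrained, amount=0.8)

total = sum(w.size for w in pruned.values())
zeros = sum(int((w == 0).sum()) for w in pruned.values())
print(f"global sparsity: {zeros / total:.2f}")
```

In practice, one-shot post-training pruning usually costs some accuracy, so a short fine-tuning pass on the pruned model is commonly used to recover it; frameworks such as PyTorch (`torch.nn.utils.prune`) offer built-in utilities for this kind of mask-based pruning.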
Thanks, but too short.
Is something missing?