Few-Shot Learning & Meta-Learning in 💯 lines of PyTorch code | MAML algorithm
- Published 26 May 2023
- Machine Learning: Implementation of the paper "Model-Agnostic Meta-Learning for Fast Adaptation of Deep Networks" in 100 lines of PyTorch code.
Link to the paper: arxiv.org/abs/1703.03400
GitHub: github.com/MaximeVandegar/Pap...
-----------------------------------------------------------------------------------------------------
CONTACT: papers.100.lines@gmail.com
#python #pytorch #maml #neuralnetworks #machinelearning #artificialintelligence #deeplearning #data #bigdata #supervisedlearning #research #metalearning #reptile #fewshotlearning #learning #fewshot - Science & Technology
Looks like you really like to reinvent the wheel... Was that intentional, or can this really not be implemented in a more straightforward way? Also, in your GitHub repo I saw that no one has committed a better version... may I try to write one and raise a PR?
Hi, I am always open to improvements. If you think you can improve the code, please do not hesitate to open a PR.
Is this first order maml or second order maml?
Excuse me for the delayed answer. This is second-order MAML, because we compute second-order gradients through the inner-loop updates.
@papersin100linesofcode Hey, thanks for the video. Where exactly are the second-order gradients calculated? Wouldn't this necessitate create_graph=True in the inner loop?
@thomaswohrle1623 Thank you for your question. It is computed on line 75, where the loss depends on theta_prime. As for create_graph=True, that is a great question that I would need to investigate.
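To make the create_graph=True point concrete: here is a minimal second-order MAML sketch on a toy 1D regression task. This is not the repository's code, and the task, learning rates, and variable names (theta, theta_prime) are illustrative assumptions; it only shows why the inner-loop gradient must be taken with create_graph=True so the outer loss can backpropagate through the adaptation step.

```python
# Toy second-order MAML sketch (illustrative, not the repo's implementation).
import torch

torch.manual_seed(0)
model = torch.nn.Linear(1, 1)
meta_opt = torch.optim.SGD(model.parameters(), lr=1e-2)
inner_lr = 0.01  # assumed inner-loop step size

def inner_update(params, x, y):
    # One inner-loop SGD step. create_graph=True keeps the graph of this
    # gradient computation so that the outer loss can differentiate through
    # it, which is exactly what makes this second-order MAML.
    pred = x @ params[0].t() + params[1]
    loss = ((pred - y) ** 2).mean()
    grads = torch.autograd.grad(loss, params, create_graph=True)
    return [p - inner_lr * g for p, g in zip(params, grads)]

# One meta-iteration over a single toy task.
x_s, y_s = torch.randn(5, 1), torch.randn(5, 1)  # support set
x_q, y_q = torch.randn(5, 1), torch.randn(5, 1)  # query set

theta = list(model.parameters())
theta_prime = inner_update(theta, x_s, y_s)      # adapted parameters

# Outer (meta) loss evaluated at theta_prime; backward() here differentiates
# through the inner update, producing the second-order terms.
pred_q = x_q @ theta_prime[0].t() + theta_prime[1]
outer_loss = ((pred_q - y_q) ** 2).mean()
meta_opt.zero_grad()
outer_loss.backward()
meta_opt.step()
```

With create_graph=False (the default), theta_prime would be detached from theta, and outer_loss.backward() would either fail or silently drop the second-order terms, reducing the update to a first-order approximation.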
Excellent explanation, thanks a ton!