Deep Learning With PyTorch - Full Course

  • Published Aug 27, 2024

COMMENTS • 463

  • @patloeber
    @patloeber  3 years ago +109

    I hope you enjoy the course :)
    And check out Tabnine, the FREE AI-powered code completion tool that helps you to code faster: www.tabnine.com/?.com&PythonEngineer *
    ----------------------------------------------------------------------------------------------------------
    * This is a sponsored link. You will not have any additional costs, instead you will support me and my project. Thank you so much for the support! 🙏

    • @sepgorut2492
      @sepgorut2492 3 years ago

      At 37:00 I found that after adding 2, not all elements of the tensor were exactly x+2. I tried this several times, and each time one of the elements of the tensor was less than x+2. Then at 37:16 you also had an anomaly. Why is this?

    • @user-if5cw5mo9x
      @user-if5cw5mo9x 2 years ago

      Thank you very much. You did great work!

    • @maranata693
    @maranata693 9 months ago

    Great video, thank you! But please don't delete each line right after you code it; wait until the subject is finished, then delete them all at once.

  • @straighter7032
    @straighter7032 1 year ago +35

    Incredible tutorial, thank you! Some corrections:
    - 1:12:02 the correct gradient function in the manual gradient calculation should be `np.dot(2*x, y_predicted - y) / len(x)`, because np.dot returns a scalar, so calling .mean() on it has no effect. (TY @Arman Seyed-Ahmadi)
    - 1:23:52 the optimizer applies the gradient exactly like we do; there is no difference. The reason the PyTorch model has different predictions is that 1) the model has a bias, and 2) the values are initialized randomly. To turn off the bias, use `bias=False` in the model construction. To initialize the weight to zero, use a `with torch.no_grad()` block and set `model.weight[0,0] = 0`. Then all versions result in exactly the same model with exactly the same predictions (as expected), as in the sketch below.
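
    A minimal sketch of the fixed comparison (variable names and data values assumed from the video, not quoted verbatim):

        import torch
        import torch.nn as nn

        X = torch.tensor([[1.0], [2.0], [3.0], [4.0]])
        Y = torch.tensor([[2.0], [4.0], [6.0], [8.0]])

        model = nn.Linear(1, 1, bias=False)    # 1) drop the bias
        with torch.no_grad():
            model.weight[0, 0] = 0.0           # 2) start from w = 0, like the manual version

        loss = nn.MSELoss()
        optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
        for epoch in range(100):
            y_pred = model(X)                  # forward pass
            l = loss(y_pred, Y)                # MSE loss
            l.backward()                       # autograd computes dl/dw
            optimizer.step()                   # w -= lr * w.grad
            optimizer.zero_grad()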

    • @Rojuvid
      @Rojuvid 10 months ago

      Thanks for this second comment! To add to this: nn.Linear wants to solve y = wx + b here. This 'b' is the bias, and by setting bias = False, instead it learns y = wx as we want it to. This also means that model.parameters() will yield only [w] and not [w, b] anymore, so do not forget to change that in line 52 in the video as well.

  • @armansa
    @armansa 2 years ago +119

    This is a fantastic tutorial, thank you for sharing this great material!
    There is one mistake though that needs clarification:
    ==========================================
    At 1:12:02 it is mentioned that the code with automatic differentiation does not converge as fast because "back-propagation is not as exact as the numerical gradient". This is incorrect: the convergence of the two versions differs because there is a mistake in the gradient() function. The dot product np.dot(2*x, y_pred - y) returns a scalar, so .mean() does not do anything. Instead of calling .mean(), np.dot(2*x, y_pred - y) should simply be divided by len(x) to give the correct mean gradient. After doing this, both methods give exactly the same convergence history and final results.
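
    A short numpy sketch of that fix (data values assumed from the video):

        import numpy as np

        X = np.array([1.0, 2.0, 3.0, 4.0])
        Y = np.array([2.0, 4.0, 6.0, 8.0])

        def gradient(x, y, y_pred):
            # np.dot already sums over the samples, so divide by len(x);
            # calling .mean() on the resulting scalar is a no-op
            return np.dot(2 * x, y_pred - y) / len(x)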

    • @reedasaeed4493
      @reedasaeed4493 2 years ago +6

      I wish I had seen your comment earlier. I was going crazy wondering what I was doing wrong when calculating manually.

    • @sebula8001
      @sebula8001 2 years ago +4

      Thanks for this comment, I was a bit concerned when he said that.

  • @DataProfessor
    @DataProfessor 3 years ago +207

    Wow this is so cool Patrick, a free course on PyTorch, great value you are bringing to the community 😆

  • @sohamdas
    @sohamdas 3 years ago +80

    This is one of the very few videos which is teaching Pytorch from the ground up! Beautiful work, @Python Engineer. Highly recommend it for any newbie + refresher.

  • @kamyararshi6235
    @kamyararshi6235 1 year ago +51

    Thanks for the course Patrick! It was a great refresher!
    BTW, at 3:42:02: in newer torchvision versions the pretrained=True argument is deprecated; you pass a weights argument instead (e.g. weights=ResNet18_Weights.DEFAULT for resnet18). A sketch follows below.
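
    A minimal sketch of the new-style call (assuming the resnet18 model from the transfer-learning section):

        from torchvision import models
        from torchvision.models import ResNet18_Weights

        # torchvision >= 0.13: pass a weights enum (or weights="DEFAULT") instead of pretrained=True
        model = models.resnet18(weights=ResNet18_Weights.DEFAULT)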

  • @liorcole7307
    @liorcole7307 2 years ago +135

    This is literally incredible. Perfect mix of theory and actual implementation. I can't thank you enough

  • @alexcampbell-black8543
    @alexcampbell-black8543 2 years ago +27

    For the feedforward part, you need to send the model to the GPU when instantiating it:
    model = NeuralNet(input_size, hidden_size, num_classes).to(device)
    if your device is 'cuda' and you forget the '.to(device)' you will get an error.

    • @liorcole7307
      @liorcole7307 2 years ago

      omg thank you so much for this. Saved me hours trying to figure out what was wrong. Serious life saver

  • @ozysjahputera7669
    @ozysjahputera7669 2 years ago +5

    I just completed the course on ML from scratch from Python Engineer. It was a great course for someone who learned all those algorithms in the past and wants to see how they get implemented using basic python lib and numpy.

  • @hom01
    @hom01 1 year ago +13

    The best PyTorch tutorial online. I love how you explained the concepts using simple examples and built on each concept one step at a time

  • @victorpalacios1747
    @victorpalacios1747 3 years ago +22

    This is probably one of the best tutorials I've ever seen for pytorch. Thank you so much.

    • @patloeber
      @patloeber  3 years ago +3

      Thanks a lot! Glad you enjoy the course

  • @Vedranation
    @Vedranation 27 days ago

    by FAR the best, most complete and comprehensible tutorial for pytorch I've come across

  • @shunnie8482
    @shunnie8482 3 years ago +6

    Finally PyTorch doesn't seem as scary as it was before. The best tutorial I could find out there, and I understood everything you've said. Thanks a lot.

    • @patloeber
      @patloeber  3 years ago +3

      glad to hear that :)

  • @Barneymeatballs
    @Barneymeatballs 3 years ago +5

    I don't even need to watch it to know its quality. Can't wait to watch it and thanks for uploading!

    • @patloeber
      @patloeber  3 years ago

      Thanks! Hope you like it

  • @ciscoserrano
    @ciscoserrano 3 years ago +11

    The man the myth the LEGEND returns with the best video of all time. 💪🏻
    GREAT JOB and THANK YOU! ❤️

  • @terryliu3635
    @terryliu3635 4 months ago

    The best hands-on tutorial on PyTorch on YouTube! Thank you!

  • @yan-jieli3475
    @yan-jieli3475 2 years ago +10

    At 4:14:00, I think you should use the ground truth as the labels rather than the predictions (line 130), because the PR curve is computed from the ground-truth labels and the predicted scores.
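
    A minimal sketch of the inputs add_pr_curve expects (dummy data; the tag and log directory are made up):

        import torch
        from torch.utils.tensorboard import SummaryWriter

        writer = SummaryWriter('runs/pr_demo')
        labels = torch.randint(0, 2, (100,))   # ground-truth 0/1 labels, not predictions
        scores = torch.rand(100)               # predicted probabilities in [0, 1]
        writer.add_pr_curve('class_0', labels, scores, global_step=0)
        writer.close()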

  • @spkt1001
    @spkt1001 2 years ago +21

    Thanks for the awesome course! The material is extremely well curated, every minute is pure gold. I particularly liked the fact that for each subject there is a smooth transition from numpy to torch. It's perfect for someone who wants a quick and thorough deeplearning recap and get comfortable with hands-on pytorch coding.

  • @ilkerbishop4217
    @ilkerbishop4217 3 years ago +2

    Best PyTorch video tutorial I have found on the entire internet. Also the code is published. Just awesome

  • @emrek1
    @emrek1 3 years ago +9

    Thanks a lot for the low level explanations.
    At 1:01:47, the dot product turns the array into a single scalar, so mean() just returns that number (the sum), not the average.
    When you fix it you get exactly the same results as with PyTorch's implementation at 1:12:00

    • @phi6934
      @phi6934 2 years ago

      What is the correct expression of the gradient that gives the same result?

    • @emrek1
      @emrek1 2 years ago +1

      @@phi6934 I don't remember the details right now, but just dividing the expression by the size of the tensor should do the trick. In the expression, put something like .../len(x) instead of .mean()

    • @phi6934
      @phi6934 2 years ago +1

      @@emrek1 yup that works thanks

    • @xaiver097zhang8
      @xaiver097zhang8 2 years ago +2

      I found that problem too, Thanks bro!

  • @ChowderII
    @ChowderII 2 years ago +3

    If you guys get an error on the GPU at around 3:13:50 saying there are two devices, make sure you do model.to(device)

  • @uglybirds6965
    @uglybirds6965 2 years ago +1

    ew, disgusting how good, clean, and free this course is, and how underappreciated it is.

  • @tanakanaoshi4769
    @tanakanaoshi4769 2 years ago +1

    Basic operations we can do, so x and y equals torch. So let's print x and y. So we do simple addition, for example.

  • @user-wo7hn6sd3j
    @user-wo7hn6sd3j 1 year ago +1

    This is the best course on this topic I've seen so far. It is perfect when you want to understand what you're doing, and the way things are presented is very pedagogical.

  • @yoloswag6242
    @yoloswag6242 3 years ago +54

    Came for pytorch, stayed for the accent!
    TENZSOoooOR 😎

  • @rickyyve9758
    @rickyyve9758 2 years ago +3

    At 1:01:41 he uses np.dot when it should be np.multiply; that would make it consistent with the PyTorch implementation. With np.dot, the items are multiplied and summed, leaving just one value to which the mean function is applied, so the reason the numpy version gets to 0 loss quicker is that the gradient is not being averaged correctly.

    • @patloeber
      @patloeber  2 years ago

      thanks for pointing this out!

  • @iandanforth
    @iandanforth 1 year ago

    In the Gradient Descent and Training Pipeline sections, the presenter glosses over why it takes 5x more training steps to converge. There are a couple of factors:
    - Autograd is less aggressive than the manual gradient calculation, effectively lowering the learning rate (you can go all the way up to 0.1 after you move to torch and autograd)
    - nn.Linear() includes a bias by default and a non-zero initialization of the weights, making it not a direct comparison. You can get much closer by adding `bias=False` to the model initialization and by zeroing out the weight with `model.weight.data.fill_(0.0)`

  • @FreePal334
    @FreePal334 1 year ago +1

    OMG, you are an amazing teacher! Finally, I can grasp PyTorch and start building stuff. thank you so much

  • @leo.y.comprendo
    @leo.y.comprendo 2 years ago +2

    When you explained backprop, I felt like I finally saw the light at the end of an endless tunnel

    • @patloeber
      @patloeber  2 years ago

      hehe, happy to hear that!

  • @user-vo7yv6wu1z
    @user-vo7yv6wu1z 3 years ago +2

    the most useful video I have ever watched

  • @li-pingho1441
    @li-pingho1441 2 years ago

    The best PyTorch tutorials I've ever watched.

  • @priyalakshmiprasad9726
    @priyalakshmiprasad9726 3 years ago +1

    This YouTube video is the best tutorial for PyTorch out there. Thank you so much!

  • @FaizanAliKhan-me9xj
    @FaizanAliKhan-me9xj 1 year ago

    Dear, with apologies, kindly note: at timestamp 1:12:05 a correction is needed. It was not the backprop gradient that was incorrect; actually the numerical one was. np.dot computes a single number, so taking its mean returns the same number. Use np.dot(2*x, Y_pred - Y) / 4 to correct the numerical gradient; it will produce the same result as the back-propagated one. The mean would be useful when W and X are matrices.
    Thank you

  • @genexu520
    @genexu520 3 years ago +3

    Ten-soooor and Inter-ference are the best of the class!

  • @brydust
    @brydust 2 years ago +2

    If z is a scalar then z.backward() is defined (and I understand the computation), while if z is not a scalar then z.backward() is not defined unless you provide an appropriate gradient argument. However, it was not entirely clear to me what computation occurs when we do z.backward(x), for example (where x has an appropriate shape). This subject matter is around 33:00.

    • @HamzaRobotics
      @HamzaRobotics 1 year ago +1

      Same happened with me

    • @abhishekmann
      @abhishekmann 1 year ago +1

      What is happening is that PyTorch assumes you have provided the intermediate gradient, i.e. dLoss/dz; using this intermediate gradient, PyTorch can compute the gradients further downstream, and the backward step succeeds. A small sketch follows below.
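
      A small sketch of this (v plays the role of the upstream gradient dLoss/dz):

          import torch

          x = torch.randn(3, requires_grad=True)
          z = x * 2                            # non-scalar output
          v = torch.tensor([0.1, 1.0, 0.001])  # assumed upstream gradient dLoss/dz
          z.backward(v)                        # computes the vector-Jacobian product J^T v
          print(x.grad)                        # equals 2 * v here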

  • @Marcos61783
    @Marcos61783 2 years ago +7

    Your course is great! Congratulations!
    I just had to make a small correction to the code in part "13. Feed Forward Net" so that I could run it on the GPU. It was necessary to add the "device" (that was previously declared) as an argument to the nn.Linear calls. Without this detail it was not possible for me to run the code on the GPU.
    class NeuralNet(nn.Module):
        def __init__(self, input_size, hidden_size, n_classes, device):
            super(NeuralNet, self).__init__()
            self.l1 = nn.Linear(input_size, hidden_size, device=device)
            self.relu = nn.ReLU()
            self.l2 = nn.Linear(hidden_size, n_classes, device=device)

        def forward(self, x):
            out = self.l1(x)
            out = self.relu(out)
            out = self.l2(out)
            return out

  • @shatandv
    @shatandv 3 years ago +2

    Patrick, you're a legend. Thank you so much for this tutorial. Now on to more advanced stuff!

  • @resoluation345
    @resoluation345 9 months ago

    This vid quality is ridiculously high, THANK YOU

  • @giovanniporcellato1171
    @giovanniporcellato1171 1 year ago

    Best tutorial on pytorch I've come across.

  • @qasimbashir1007
    @qasimbashir1007 2 years ago

    41:01 Please change torch.optim.SGD(weights, lr=0.01) to torch.optim.SGD([weights], lr=0.01): the weights must be passed in a list.
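
    A tiny sketch of why: the optimizer expects an iterable of parameter tensors, so a bare tensor must be wrapped in a list:

        import torch

        weights = torch.tensor(0.0, requires_grad=True)
        optimizer = torch.optim.SGD([weights], lr=0.01)  # [weights], not weights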

  • @danyalzia6958
    @danyalzia6958 3 years ago

    One of the best PyTorch tutorial series on YouTube :)

  • @Oof_the_gamer
    @Oof_the_gamer 2 days ago

    1:24:05 these are the correct variables:
    rate = 0.034  # learning rate
    number_iterations = 769

  • @thechrism2249
    @thechrism2249 1 year ago +7

    This is amazing! It was fun to follow along and I feel like I am able to try pytorch on some projects now. Thank you 😍

  • @jiecao9825
    @jiecao9825 2 years ago

    Thank you Python Engineer! This is the best tutorial video I've ever seen about pytorch.

  • @aberry24
    @aberry24 2 years ago +5

    Nice tutorial!
    @1:11:40 at line #37: instead of using "w -= learning_rate * w.grad", I used the expanded form "w = w - learning_rate * w.grad" and thought it would be the same. But in this case 'w.grad' returns 'None': the new w has requires_grad set to False, hence the error.
    "w -= learning_rate * w.grad" behaves like "w.data = w.data - learning_rate * w.grad".
    It seems torch Tensors (with requires_grad=True) have an overridden "__iadd__" implementation. A sketch of the difference follows below.
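
    A short sketch of the difference (inside torch.no_grad(), as in the video):

        import torch

        w = torch.tensor(1.0, requires_grad=True)
        grad = torch.tensor(2.0)      # stand-in for w.grad

        with torch.no_grad():
            w -= 0.1 * grad           # in-place: w stays the same leaf tensor
        print(w.requires_grad)        # True

        with torch.no_grad():
            w = w - 0.1 * grad        # rebinding: creates a brand-new tensor
        print(w.requires_grad)        # False, so the next backward()/update fails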

    • @Darkspell1947
      @Darkspell1947 9 months ago

      I got this error on that line: unsupported operand type(s) for *: 'float' and 'builtin_function_or_method'. Any help please?

  • @haichen8132
    @haichen8132 1 year ago +1

    thank u for your patience!

  • @Hiyori___
    @Hiyori___ 8 months ago

    this video was super helpful and clear, I watched everything up until transfer learning, ty so much

  • @neotodsoltani5902
    @neotodsoltani5902 1 year ago

    A probable mistake: Leaky ReLU isn't used to solve the vanishing gradient problem but the dead-neuron problem, which can happen when you use ReLU activations.

  • @tljstewart
    @tljstewart 2 years ago +3

    Update: Note a subtle detail: inside with torch.no_grad(), if you use w = instead of w -=, a new w variable is created with requires_grad = False, which is fixed by setting w.requires_grad = True afterwards.
    Original: Using pytorch 1.11, and go figure, @1:11 w.grad.zero_() errors; instead I had to put w.requires_grad = True

  • @jeffkirchoff14
    @jeffkirchoff14 2 years ago

    Here's the best channel for data science and ML

  • @fatemehmirhakimi
    @fatemehmirhakimi 1 month ago

    Thank you Patrick. It was a fantastic tutorial.

  • @xhinker
    @xhinker 2 years ago

    This is the best PyTorch tutorial ever, thank you!

  • @AliRashidi97
    @AliRashidi97 2 years ago

    best pytorch tutorial ever

  • @xz3642
    @xz3642 2 years ago

    This is the best tutorial on PyTorch

  • @johnyou5671
    @johnyou5671 7 months ago

    Thanks for this incredible resource. FYI I believe the gradient function computed at 1:01:38 is incorrect. I'm pretty sure it should be:
    def gradient(x, y, y_predicted):
        return ((y_predicted - y) * 2 * x).mean()

  • @doeskrippsayheyguyshowsitg578

    Wanna explore a package like pytorch? Run print(dir(torch)), or the same for any other package/module, and you'll get an interesting printout of the available functions.

  • @jonesen4395
    @jonesen4395 2 years ago

    Thanks a lot, this tutorial helped me tremendously with my bachelor's thesis

  • @sirnate9065
    @sirnate9065 1 year ago

    Someone has probably mentioned this already, but on line 23 at 1:04:08, .mean() is not doing anything, since taking the dot product already returned a scalar; it is just dividing by one. Instead, you should be dividing by len(x) or len(y), or there may be another more efficient way to get the same result.

  • @zhaodaye3560
    @zhaodaye3560 2 years ago

    1:12:09 It's because the gradient in your formula is not correct, not because of PyTorch's backpropagation: np.dot already sums over the samples, so the result should be divided by len(x) instead of calling .mean() on it.

  • @fatemehmirhakimi
    @fatemehmirhakimi 1 month ago

    Thank you Patrick for your fantastic tutorial. ☺

  • @dansuniverse9642
    @dansuniverse9642 3 years ago +1

    I have just finished the whole tutorial as a refresher. Everything is so much clearer now. Thanks.

  • @AustinSalgat
    @AustinSalgat 3 years ago +4

    Excellent series. Using this to review what I've learned and to also learn PyTorch, thank you for this. The only thing I'd change is that you add an upward inflection to the end of most of your sentences which is a bit jarring (makes it sound like every sentence is a question).

  • @jennysun5777
    @jennysun5777 2 years ago +3

    I've taken a graduate course in deep learning and neural networks, and have watched other tutorials here and there, but this is by far the most helpful one. Granted, all the previous materials have probably contributed, but the way you teach is unparalleled!

    • @patloeber
      @patloeber  2 years ago

      thank you so much! glad you like it :)

  • @geezer2867
    @geezer2867 3 years ago +1

    unbelievably excellent free tutorial course! Thank you!

    • @patloeber
      @patloeber  3 years ago +1

      Glad it was helpful!

  • @wisdomtent
    @wisdomtent 3 years ago +2

    This tutorial is supppppppppppper great! The best deep learning tutorial I've ever watched. Thank you so much.
    I enjoyed the tutorial so much that I didn't want it to stop!
    I look forward to seeing more great videos like this from this channel

  • @NguyenHoang-wx4ym
    @NguyenHoang-wx4ym 2 years ago

    I followed the whole course and it helped me a lot. Thanks a ton

  • @TorontoWangii
    @TorontoWangii 2 years ago

    Best course on pyTorch tutorial, thanks!

  • @goelnikhils
    @goelnikhils 1 year ago

    Amazing and Comprehensive coverage of PyTorch. Amazing Video. Thanks a lot

  • @MR_AI_59
    @MR_AI_59 1 year ago

    The basic explanation of autograd was great

  • @duynguyen4154
    @duynguyen4154 3 years ago

    Such a clear and comprehensive tut for Pytorch!

  • @peddivarunkumar
    @peddivarunkumar 2 years ago +1

    Perfect tutorial for a beginner!!!!!!!!

  • @austinleedavis
    @austinleedavis 1 year ago

    Note at 2:08: `dataiter.next()` is now throwing an AttributeError: '_MultiProcessingDataLoaderIter' object has no attribute 'next'. I changed that line to `data = next(dataiter)`

  • @datascience3008
    @datascience3008 1 year ago

    This is an error I have found.
    Time: 1:01:55
    According to the equation, we need to divide by N, the number of terms (here 4). But the code computes a dot product, which already returns a single value, so the trailing .mean() divides by 1 instead of the desired 4.

  • @schlingelgen
    @schlingelgen 1 year ago

    2:59:00 -> Starting with PyTorch 1.13 examples.next() is no longer valid.
    New syntax is: next(examples)

  • @smooth7041
    @smooth7041 5 months ago

    Really nice, well explained, well tested, etc.. Thanks a lot!!

  • @HeadshotComing
    @HeadshotComing 2 years ago

    Man this is pure gold, thank you so much!

  • @xhinker
    @xhinker 2 years ago

    I finished the whole video, again, thank you so much!

  • @tschalky
    @tschalky 2 years ago

    Absolute top-quality videos! Thank you very much, and may you go on forever

  • @nvsabhishek7356
    @nvsabhishek7356 2 years ago

    Thank you very much! literally the best place to learn pytorch

  • @furia151
    @furia151 1 year ago

    amazing tutorial man! thank you so much !!! this is just the best!

  • @cerioscha
    @cerioscha 1 year ago

    Gold dust! Thanks for sharing this.

  • @zechenzhang5891
    @zechenzhang5891 2 years ago

    Thank you so much. If I get a job thanks to this, I want to make a donation.

  • @uniZite
    @uniZite 2 years ago +1

    Super good tutorial, this really made my day - many thanks !!!
    In 05_gradients_torch, the difference in results from 05_gradients_numpy is because the derivative function should return 1/N * np.dot(2*x, y_pred - y) where N = 4.
    Then the results are exactly equal.

  • @hankystyle
    @hankystyle 3 years ago +3

    Thank you for your excellent tutorial! It helps my homework and research a lot!!

  • @haiyangxia9793
    @haiyangxia9793 3 years ago +1

    Cool, really a very nice course, thanks for your effort to make it free online!!!

  • @saikumarreddyyeddula5043
    @saikumarreddyyeddula5043 2 years ago +2

    Wow. This course is awesome. An end-to-end tour of everything.
    I was wondering why I needed to learn about Tensorboard and JSON files (in other series) to use Torch. This was very useful to me.

  • @giacomodonini7303
    @giacomodonini7303 2 years ago +3

    Thank you very much, this tutorial is super useful and it's making my life better!

    • @ITsmapleTimexD
      @ITsmapleTimexD 2 years ago

      Right! It's not that backward isn't precise, as he said; if you compute it by hand it is indeed -30.

    • @yusun5722
      @yusun5722 1 year ago

      Correct. np.dot() doesn't actually take the mean (it takes the sum). Hence the gradient is larger than the true value and the convergence is faster.

  • @devadharshan6328
    @devadharshan6328 3 years ago +1

    Thanks for your help, I'm able to learn many new things. Keep up this work.
    Thank you

  • @ashishrahul4692
    @ashishrahul4692 2 years ago +1

    How is it that for the feed-forward neural network we zero the gradients first, before computing gradients and updating weights @3:08:35, whereas in the case of linear/logistic regression we zero the gradients after computing them and updating the weights @1:36:19 @1:52:41?
    Intuitively this should not make any difference, but I wanted to confirm whether that truly is the case. Is this just a nomenclature thing?

  • @luosenanthony8344
    @luosenanthony8344 2 years ago +2

    I am a big fan of your content. It is just amazing the way you explain complicated things in a simple way!

  • @saravanannatarajan6515
    @saravanannatarajan6515 2 years ago

    Great tutorial! One small point regarding CNN - CIFAR10:
    while calculating accuracy, it's better to use
    for i in range(len(labels)):
    rather than
    for i in range(batch_size):
    since if the last batch is smaller than the given batch_size, the latter will throw an index-out-of-range error. See the sketch below.
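
    A hedged sketch of that loop (variable names assumed from the video's CIFAR10 script, with dummy data):

        import torch

        labels = torch.tensor([3, 5, 5])        # last batch may be smaller than batch_size
        predicted = torch.tensor([3, 5, 1])
        n_class_correct = [0] * 10
        n_class_samples = [0] * 10

        for i in range(len(labels)):            # not range(batch_size)
            label = labels[i].item()
            pred = predicted[i].item()
            if label == pred:
                n_class_correct[label] += 1
            n_class_samples[label] += 1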

  • @yifuzeng4270
    @yifuzeng4270 3 years ago +1

    How do you get the PyTorch program to display in the "OUTPUT" pane of the console under VS Code? Which plug-ins need to be installed? What are the shortcut keys? I can only run the code in "TERMINAL", which annoys me.

    • @patloeber
      @patloeber  2 years ago +1

      This is the Code Runner plugin

  • @py2992
    @py2992 2 years ago

    This course is amazing !! Thanks for everything.

  • @alexstream4218
    @alexstream4218 3 years ago +2

    At 04:40 I needed to open the Anaconda terminal because it didn't recognise the 'conda' command in the Windows terminal.

    • @patloeber
      @patloeber  3 years ago +1

      Ah yes, on Windows you have to either manually add it to your PATH, or simply use the Anaconda terminal

  • @eugenefrancisco8279
    @eugenefrancisco8279 2 years ago

    Dude, this has genuinely helped me so much. Thank you!

  • @byiringirooscar321
    @byiringirooscar321 1 year ago +2

    Friend, please, how can I fix this?
    '_MultiProcessingDataLoaderIter' object has no attribute 'next'

    • @byiringirooscar321
      @byiringirooscar321 1 year ago +1

      Got it, we have to wrap it in next():
      dataiter = iter(dataloader)
      data = next(dataiter)
      features, labels = data
      print(features, labels)

  • @alexandreruedapayen6528
    @alexandreruedapayen6528 2 years ago

    That is an excellent course. Thank you Python Engineer

  • @mannyc6649
    @mannyc6649 1 year ago

    At 1:01:55 you are taking the mean of a scalar, which doesn't do anything. Since you have only 4 data points, this effectively means that your learning_rate was multiplied by 4. This is the reason why it seems to work better than PyTorch: this particular case is so well behaved that taking larger steps is enough to speed it up.

  • @user-su4jh4sp9b
    @user-su4jh4sp9b 2 years ago

    such a brilliant course !! I thank you so much !!

  • @iworeushankaonce
    @iworeushankaonce 3 years ago

    Well done, a very smooth intro to PyTorch.