Transfer Learning (C3W2L07)

  • Published Jan 5, 2025

COMMENTS • 47

  • @rogerab1792
    @rogerab1792 5 years ago +7

    Transfer learning is the key to AGI. Once a neural network learns the patterns of logical relationships and is able to transfer that learning and apply it to new problems, an AI will be able to draw intelligent conclusions. All that is needed is an AI that picks up patterns in logical problems and learns from its conclusions. Once it has learned, it needs to pick out similarities between new problems and the old, already-solved ones to transfer its neural pathways, but also COMBINE them to deduce new conclusions (combining them with the target problem in mind), conclusions which will be useful for solving new problems, and so on... (The key concept here is to find a way to COMBINE trained neural nets to build newer, smarter, and more general ones. An AI should be trained to learn to combine specific neural nets to solve new problems related to the ones already solved, then that pathway-combining AI should assist in combining neural nets for different problems, combining not only the neural nets themselves but also the neural nets that were used to combine nets in the past, to create better, more general combinations of nets.) All this will keep building on itself, and AGI will become more capable faster and faster as time passes.

  • @sehaba9531
    @sehaba9531 4 years ago +11

    Very clear and simple explanation, thank you so much

  • @kpagrawal2306
    @kpagrawal2306 3 months ago +2

    Very nice explanations - Dr. Andrew

  • @salehjamali6716
    @salehjamali6716 2 years ago

    Thank you a lot for summarizing the whole concept in one small video. ❤️

  • @SaniaSinghania
    @SaniaSinghania 1 year ago +3

    Don't know why people appreciate him. He does not break down complex concepts in simpler terms at all.

    • @TragicGFuel
      @TragicGFuel 11 months ago +3

      Are you being sarcastic?

    • @Ash-bc8vw
      @Ash-bc8vw 3 months ago

      You can input his explanation into ChatGPT and ask it to explain it to you as if you were a 3-year-old.
      Because he has already made it simple.

  • @hannahJane300
    @hannahJane300 5 months ago

    I am reading a paper on GNNs. There are terms I did not understand. Thank you so much.

  • @nadirshah8600
    @nadirshah8600 2 months ago

    In the case of a time-series problem, can we do transfer learning on the exact same dataset that the pre-trained model was trained on?

  • @spacecapitalism7152
    @spacecapitalism7152 6 years ago +19

    The video is done at 1:25, lol!! He explained it so simply.

    • @trexmidnite
      @trexmidnite 3 years ago

      Maybe your brain is full by that point

    • @MrZouzan
      @MrZouzan 3 years ago +1

      @@trexmidnite rude

  • @hazema.6150
    @hazema.6150 1 year ago

    Wonderful, thanks for uploading this video

  • @waliullahmahir869
    @waliullahmahir869 1 year ago

    Nice explanation 😍

  • @johncsheath5037
    @johncsheath5037 2 years ago

    Brilliant, Andrew, thank you

  • @xDevoneyx
    @xDevoneyx 4 years ago +1

    Assuming in this example, because it is about images, we are talking about neural networks with convolution layers, right? Then I think of the visualizations of the filters in the convolution layers, and I do not understand how images of cats have similar structures to images of cells/tissue/bones in radiology. I can imagine that a network trained on lots of pictures of pebbles could help pre-training for images of cell tissue, because of the somewhat similar circular/elliptical structure. Could you comment on this?
    Another thing I am confused about is that you mention you could retrain only the last layer. Typically in a convnet this is a dense layer. Does that mean there are cases in which no convolution layers are retrained, yet the network is effective at predicting types of images it has never seen, just by retraining the dense layer?
    Thanks for the video, much appreciated!
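
    On the second question: yes, that is a common setup. A minimal PyTorch sketch of "retrain only the last layer" (the model choice and class count here are hypothetical, not from the video):

    ```python
    import torch.nn as nn
    from torchvision import models

    # Load a convnet pre-trained on ImageNet ("task A").
    model = models.resnet18(weights="IMAGENET1K_V1")

    # Freeze everything, including all convolution layers.
    for param in model.parameters():
        param.requires_grad = False

    # Replace only the final dense layer; its fresh parameters are trainable
    # by default, so training updates nothing but this new head.
    num_radiology_classes = 3  # hypothetical class count for "task B"
    model.fc = nn.Linear(model.fc.in_features, num_radiology_classes)
    ```

    The frozen convolution filters act as generic edge/texture/shape detectors, which is why features learned from cats can still be a useful starting point for radiology images.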

  • @arnabmondal601
    @arnabmondal601 4 years ago +1

    Very clear explanation.

  • @arpege3618
    @arpege3618 4 years ago

    Man, thanks for the info. I like your explanations and manner.
    Thank you again, mister.

  • @HanhTangE
    @HanhTangE 4 years ago +2

    Pretty intuitive. I luv it :)

  • @mdasadullahturja1481
    @mdasadullahturja1481 6 years ago +3

    Great explanation!!

  • @shigstsuru765
    @shigstsuru765 4 years ago

    Got a question:
    Does transfer learning work if tasks A and B have the same input but different label columns?
    So let's say A's and B's task is to detect emotion (say, whether the person likes something or dislikes it).
    A has a better detection rate than B, and I'm trying to transfer the high detection rate of A to B.
    Dataset A has anger, sorrow, joy, excitement, and dataset B has anger, joy, excitement.
    I am a super-amateur in this field, so I'm not sure if I'm talking about anything plausible, but it would be a great help to know whether the scenario is plausible or not.
    Many thanks

    • @nitishjaiswal6610
      @nitishjaiswal6610 4 years ago

      I guess that's exactly what transfer learning is about! From his example as well, the image-recognition dataset has low-level features which are used during radiology diagnosis. Similarly, in your example, if you train using 5 emotions, there will be low-level features that help you detect on a different dataset (with 3 emotions) as well. We might just need to retrain the network, as the new dataset doesn't contain the 2 dropped emotions, and make adjustments to the previous training to accommodate all the test data in the new classes.

    • @andrewkreisher689
      @andrewkreisher689 4 years ago

      Yep, that's pretty much what I would use it for.

  • @kabokbl2412
    @kabokbl2412 2 years ago

    Doesn't retraining on the new dataset simply preserve the model architecture (i.e. the sequence and types of layers), since only the weights and biases are retrained/fine-tuned?

  • @nickbelanger5225
    @nickbelanger5225 3 years ago

    Very nice explanation

  • @StefanBrock_PL
    @StefanBrock_PL 2 years ago

    Very useful background.

  • @shashanksharma7202
    @shashanksharma7202 5 years ago

    How do you handle input data if the input size/aspect ratio the pre-trained model expects is different from that of the input images? For example, say the input size for Task A image recognition is 224 x 224, and for Task B diagnosis it is 250 x 125.

    • @linachato5817
      @linachato5817 5 years ago +1

      By resizing the dataset (scaling or cropping). In the opposite situation, you can add a border to each image!
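
      A sketch of both options with torchvision transforms (the sizes are taken from the question above; padding amounts are illustrative):

      ```python
      from torchvision import transforms

      # Downsize 250 x 125 "task B" images to the 224 x 224 input the
      # pre-trained model expects (simple scaling; distorts aspect ratio).
      resize = transforms.Compose([
          transforms.Resize((224, 224)),
          transforms.ToTensor(),
      ])

      # The opposite situation: images smaller than the expected input.
      # Add a border instead of upscaling.
      pad = transforms.Compose([
          transforms.Pad(padding=(0, 50)),  # (left/right, top/bottom) border in pixels
          transforms.ToTensor(),
      ])
      ```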

  • @nands4410
    @nands4410 6 years ago +2

    amazing!

  • @raulmaldonado3477
    @raulmaldonado3477 6 years ago +7

    is this video part of a bigger course?

    • @dtienloi
      @dtienloi 6 years ago

      Raul Maldonado yes

    • @AnshumanKumar007
      @AnshumanKumar007 6 years ago +1

      It's a part of the course on Coursera

  • @littletiger1228
    @littletiger1228 11 months ago

    beautiful

  • @MOHSINALI-bk2qo
    @MOHSINALI-bk2qo 5 years ago +1

    thank you sir

  • @Fatima-kj9ws
    @Fatima-kj9ws 4 years ago

    Great Thanks

  • @swagatggautam6630
    @swagatggautam6630 1 year ago

    Time to reshoot the video with a higher-quality camera...

  • @manojsriramula2355
    @manojsriramula2355 4 years ago

    A big thanks

  • @shahi_gautam
    @shahi_gautam 5 years ago

    Can we use the concept of transfer learning with an SVM?

    • @linachato5817
      @linachato5817 5 years ago +2

      Yes, you can use the pre-trained model for feature extraction and then use the feature matrix to train an SVM, NN, etc.
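
      A minimal sketch of that pipeline (the tiny stand-in `loader` below is hypothetical; in practice it would be a real DataLoader over your images):

      ```python
      import torch
      import torch.nn as nn
      from torchvision import models
      from sklearn.svm import SVC

      # Pre-trained convnet with the classification head removed: the network
      # now outputs its 512-dimensional pooled features instead of class logits.
      backbone = models.resnet18(weights="IMAGENET1K_V1")
      backbone.fc = nn.Identity()
      backbone.eval()

      # Stand-in for a real DataLoader over the new dataset.
      loader = [(torch.randn(8, 3, 224, 224), torch.arange(8) % 2)]

      feats, labels = [], []
      with torch.no_grad():
          for x, y in loader:
              feats.append(backbone(x))
              labels.append(y)
      X = torch.cat(feats).numpy()
      y = torch.cat(labels).numpy()

      # Train a classical classifier on the extracted feature matrix.
      svm = SVC(kernel="rbf").fit(X, y)
      ```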

  • @saurabhagarwal8970
    @saurabhagarwal8970 4 years ago +2

    Code for transfer learning, anyone??
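
    There is no code in the video itself, but a minimal Keras sketch of the idea could look like this (the model choice, class count, and data names are assumptions):

    ```python
    import tensorflow as tf

    NUM_CLASSES = 3  # hypothetical class count for the new task

    # "Task A": a network pre-trained on a large image dataset (ImageNet).
    base = tf.keras.applications.MobileNetV2(
        input_shape=(224, 224, 3), include_top=False, weights="imagenet")
    base.trainable = False  # freeze; set True to fine-tune if the new dataset is large

    # "Task B": stack a fresh output layer on top and train only that.
    model = tf.keras.Sequential([
        base,
        tf.keras.layers.GlobalAveragePooling2D(),
        tf.keras.layers.Dense(NUM_CLASSES, activation="softmax"),
    ])
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    # model.fit(new_images, new_labels, epochs=5)  # placeholder data names
    ```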

  • @THEGASDRIP
    @THEGASDRIP 1 year ago

    this is cool

  • @miketsui3a
    @miketsui3a 4 years ago +2

    now we are in 2020, but this vid is still in 360p

    • @poojakabra1479
      @poojakabra1479 2 years ago

      Ikr, at first I suspected this wasn’t the original channel

  • @rogerab1792
    @rogerab1792 4 years ago

    What if you trained the network on both sets at once, keeping the two different output layers? That way the shared net doesn't get biased towards one set or the other, and it may even generalise better to new data for both sets. It would be like a regularization technique where you update the weights to fit one set while regularizing on the other. If this technique already exists, someone please reply; I'd like to see the results and whether it is useful for regularization.
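
    This setup does exist: it is called multi-task learning (hard parameter sharing), and it is indeed often used as a regularizer. A hypothetical PyTorch sketch of the idea (architecture and class counts are made up):

    ```python
    import torch
    import torch.nn as nn

    class TwoHeadNet(nn.Module):
        """Shared trunk, one output layer per dataset."""
        def __init__(self, num_classes_a, num_classes_b):
            super().__init__()
            self.trunk = nn.Sequential(          # layers shared by both tasks
                nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
                nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            )
            self.head_a = nn.Linear(16, num_classes_a)
            self.head_b = nn.Linear(16, num_classes_b)

        def forward(self, x):
            h = self.trunk(x)
            return self.head_a(h), self.head_b(h)

    model = TwoHeadNet(num_classes_a=1000, num_classes_b=3)
    criterion = nn.CrossEntropyLoss()
    optimizer = torch.optim.Adam(model.parameters())

    # One combined step on a batch from each dataset (random stand-in data):
    # both losses flow into the shared trunk, giving the regularizing effect
    # described in the comment.
    xa, ya = torch.randn(8, 3, 32, 32), torch.randint(0, 1000, (8,))
    xb, yb = torch.randn(8, 3, 32, 32), torch.randint(0, 3, (8,))
    logits_a, _ = model(xa)
    _, logits_b = model(xb)
    loss = criterion(logits_a, ya) + criterion(logits_b, yb)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    ```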

  • @736939
    @736939 2 years ago

    OK, now how do you program it in PyTorch? Update the architecture of the NN and train it on a different task. HOW?
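
    A minimal end-to-end sketch in PyTorch (the class count and the tiny stand-in loader are made up): keep the pre-trained architecture, swap only the output layer, and run an ordinary training loop.

    ```python
    import torch
    import torch.nn as nn
    from torchvision import models

    model = models.resnet18(weights="IMAGENET1K_V1")  # pre-trained "task A" net
    for p in model.parameters():
        p.requires_grad = False                       # freeze pre-trained layers
    model.fc = nn.Linear(model.fc.in_features, 3)     # new head for "task B"

    criterion = nn.CrossEntropyLoss()
    optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)  # train the head only

    # Stand-in for a real DataLoader over the new, small dataset.
    loader = [(torch.randn(8, 3, 224, 224), torch.randint(0, 3, (8,)))]

    model.train()
    for epoch in range(5):
        for x, y in loader:
            optimizer.zero_grad()
            loss = criterion(model(x), y)
            loss.backward()
            optimizer.step()
    ```

    With enough new data, unfreeze some or all of the pre-trained layers and fine-tune them with a smaller learning rate instead of training only the head.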