Lecture 2: Image Classification

  • Published Dec 18, 2024

COMMENTS • 28

  • @conradwiebe7919
    @conradwiebe7919 4 years ago +106

    If you are reading this, you are part of the ten percent (as of the time of writing) that didn't up and leave after the intro. I hope to see you all at lecture 22.

    • @m-aun
      @m-aun 4 years ago +2

      Do you want to do this course together?

    • @conradwiebe7919
      @conradwiebe7919 4 years ago +1

      I'm really just skimming these to build intuition. I'm not sure what you mean by "do the course together" — I'd be happy to discuss anything in the lectures, but I'm not going to do any computer vision projects out of this.

    • @m-aun
      @m-aun 4 years ago

      @@conradwiebe7919 I was planning to do all the HWs/assignments given on the course website along with the lectures.

    • @conradwiebe7919
      @conradwiebe7919 4 years ago +2

      Didn't even see they had those lol, Imma still stick with my original plan tho. I'm trying a more organic entrance to ML. I made some really rudimentary search algos (queue-based, stack-based, greedy, and A*) and have now started generating mazes. I want to try to train something that behaves like A* search. It's a long way from deep learning, but I don't think I can make that leap and still know everything that's going on. Maybe I'll join you a month from now; I'd still be happy to discuss the topics with you.

    • @m-aun
      @m-aun 4 years ago

      @@conradwiebe7919 Then you should start with the ML course taught by Andrew Ng.

  • @guavacupcake
    @guavacupcake 4 years ago +17

    Much better audio, thanks!

  • @xanderlewis
    @xanderlewis 1 year ago +1

    25:22 He just described a well-known exam technique beloved of students everywhere!

  • @terryliu3635
    @terryliu3635 7 months ago +3

    Great lectures!! Please keep posting the latest series! Thank you!!

  • @raphaelmourad3983
    @raphaelmourad3983 4 years ago +7

    Very good teaching of computer vision! Thanks, Justin Johnson, for these very nice lectures.

  • @huesOfEverything
    @huesOfEverything 3 years ago +2

    I like how he says, "This is WRONG... so bad... you should not do this!" It cracks me up for some reason.

  • @zhaobryan4441
    @zhaobryan4441 10 months ago

    He taught the essentials in a great way.

  • @andrewstang8590
    @andrewstang8590 9 months ago +1

    Hi,
    I thought the MNIST dataset had 60k training images. Or am I wrong?

  • @DariaShcherbak
    @DariaShcherbak 5 months ago

    Thank you for the lecture! Greetings from Ukraine)

  • @veggeata1201
    @veggeata1201 4 years ago +6

    For the nearest neighbor classifier, isn't training time going to be O(n)? If we are going to store pointers for each training example, we still have to iterate over the training examples, of which there are n.

    • @bhavin_ch
      @bhavin_ch 4 years ago +11

      If you have to iterate over the elements, yes. But if you just copy a reference to the list, it's a single pointer assignment, which is O(1).
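The distinction in this thread can be made concrete with a minimal sketch (my own NumPy version, not the lecture's exact code): training stores array references in O(1), while every prediction scans all n training examples.

```python
import numpy as np

class NearestNeighbor:
    """Toy 1-nearest-neighbor classifier with L1 distance."""

    def train(self, X, y):
        # O(1): store references to the arrays; no copy, no iteration.
        self.X_train = X
        self.y_train = y

    def predict(self, X):
        # O(n) per query: compare against every stored training example.
        preds = np.empty(len(X), dtype=self.y_train.dtype)
        for i, x in enumerate(X):
            dists = np.abs(self.X_train - x).sum(axis=1)  # L1 distance to each row
            preds[i] = self.y_train[np.argmin(dists)]
        return preds
```

Note that `train` does no work proportional to n; the entire cost is deferred to `predict`, which is exactly the backwards trade-off the lecture criticizes.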

  • @훼에워어-u1n
    @훼에워어-u1n 1 year ago

    Thanks! Such an informative video.

  • @adarshtiwari6374
    @adarshtiwari6374 4 years ago +2

    14:06

  • @randomsht-cy7we
    @randomsht-cy7we 4 months ago

    That "hot dog / not hot dog" example was from Silicon Valley. The professor watches the show :)

  • @mahmoudatiaead7347
    @mahmoudatiaead7347 1 year ago

    Does anyone know how I can get the homework?

  • @ДаниилГусев-с9л
    @ДаниилГусев-с9л 2 years ago

    Well, maybe I'm missing something, but I totally disagree with the train/validation/test idea as Justin described it. We train a model on the training data and evaluate on the validation set to adjust the model's behavior. That's correct; however, it does not mean we should look at the test set only once at the very end of our research. We should evaluate our model on the test set at least several times, and if the model's performance on the test set differs greatly from its performance on the validation set, it means something was done very wrong, e.g. the splitting strategy. Of course, using the test set influences our decisions, but by how much? Can you really say that evaluating the finished model on the test set spoils everything? I doubt that.

    • @sampathkovvali6255
      @sampathkovvali6255 1 year ago +1

      Nope, your model is not allowed to even peek at the test set during tuning. You, as the one tuning the model, will also overfit. 😂