If you are reading this, you are in the ten percent (as of the time of writing) that didn't up and leave after the intro. I hope to see you all at lecture 22.
you want to do this course together?
I'm really just skimming these to build better intuition. I'm not sure what you mean by "do the course together"; I'd be happy to discuss anything in the lectures, but I'm not going to go on to do any computer vision projects out of this.
@conradwiebe7919 I was planning to do all the HWs/assignments given on the course website along with the lectures.
Didn't even see they had those lol, I'm still going to stick with my original plan though. I'm trying a more organic entrance to ML. I made some really rudimentary search algos, queue- and stack-based (BFS/DFS), greedy, and A*, and have now started generating mazes. I want to try to train something that behaves like A* search. It's a long way from deep learning, but I don't think I can make that leap and still know everything that's going on. Maybe I'll join you a month from now; I'd still be happy to discuss the topics with you.
@conradwiebe7919 Then you should start with the ML course taught by Andrew Ng.
Much better audio thanks!
Great lectures!! Pls keep posting the latest series! Thank you!!
Very good teaching of computer vision! Thanks Justin Johnson for these very nice lectures.
25:22 He just described a well-known exam technique beloved of students everywhere!
He taught the essentials in a great way.
I like how he says.. 'This is WRONG.. so bad... you should not do this!' cracks me up for some reason
Thank you for the lecture! Greetings from Ukraine)
Hi
I thought the MNIST dataset had 60k training images. Or?
thanks! such an informative video
For the nearest neighbor classifier, isn't training time going to be O(n)? If we are going to store a pointer for each training example, we still have to iterate over all n training examples.
If you have to iterate over the elements, then yes. But if you just assign the whole list to a field, that's copying a single pointer, so training is O(1).
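A minimal sketch of that point, in the spirit of the classifier from the lecture (the class and method names here are illustrative, not taken from the course assignments): train only stores references to the data, which is O(1), and all the O(n) work per query happens in predict.

```python
import numpy as np

class NearestNeighbor:
    def train(self, X, y):
        # O(1): just keep references to the training arrays;
        # no copy, no per-example work.
        self.X_train = X
        self.y_train = y

    def predict(self, X):
        # O(n) per test example: compare against every stored training example.
        preds = np.empty(X.shape[0], dtype=self.y_train.dtype)
        for i in range(X.shape[0]):
            # L1 distance to every training example
            dists = np.sum(np.abs(self.X_train - X[i]), axis=1)
            preds[i] = self.y_train[np.argmin(dists)]
        return preds
```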
That "hot dog / not hot dog" example was from Silicon Valley. The professor watches the show :)
How can I get the homework? Does anyone know?
Assignments? Check out the course page linked in the description.
14:06
Well, maybe I'm missing something, but I totally disagree with the train/valid/test idea as Justin described it. We train a model on the training data and evaluate it on the validation set to change the model's behavior. That's correct; however, it does not mean we should look at the test split only once at the very end of our research. We should evaluate our model on the test set at least a few times, and if its performance on the test set is very different from its performance on the validation set, it means something was done very wrong - e.g. the splitting strategy. Of course, using the test set influences our decisions, but by how much? Can you really say that evaluating the finished model on the test set spoils everything? I doubt it.
Nope, your model is not allowed to look at the test set during tuning, not even a peek. You, as part of the model-selection loop, will also overfit to it. 😂
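To make the protocol from the lecture concrete, here is a minimal sketch assuming a k-NN classifier with scikit-learn's fit/score interface; the data, split sizes, and candidate k values are made up for illustration. Every tuning decision is driven by the validation split, and the test split is touched exactly once at the very end.

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

# Illustrative data and splits: tune on val, report on test exactly once.
rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 20))
y = (X[:, 0] > 0).astype(int)

X_train, y_train = X[:600], y[:600]
X_val,   y_val   = X[600:800], y[600:800]
X_test,  y_test  = X[800:],   y[800:]

best_k, best_acc = None, -1.0
for k in [1, 3, 5, 7, 9]:
    model = KNeighborsClassifier(n_neighbors=k).fit(X_train, y_train)
    acc = model.score(X_val, y_val)   # validation accuracy drives all decisions
    if acc > best_acc:
        best_k, best_acc = k, acc

# The test set is used once, only after k has been frozen.
final = KNeighborsClassifier(n_neighbors=best_k).fit(X_train, y_train)
print("chosen k:", best_k, "test accuracy:", final.score(X_test, y_test))
```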
27:02