Hyperparameter Optimization - The Math of Intelligence #7

  • Published 12 Jan 2025

COMMENTS • 139

  • @surajthapa4160
    @surajthapa4160 3 years ago +5

    You are one of the rare educators who can make their viewers smile in between the learning, which makes it effortless. I believe I could watch even your 1 hr long content without stopping. Thanks for making learning easy and fun.

  • @JulianHarris
    @JulianHarris 5 years ago +7

    Holy crap, I can say with confidence this is the funniest introduction to hyperparameter optimisation there will ever be. Ever. Genius work. You don't call any more, but that's ok. Live your life, enjoy it! Be free! Be yourself!

  • @DaredevilGotU
    @DaredevilGotU 6 years ago +1

    Every time I visit the page, I learn a new technique. Thanks Siraj

  • @tumul1474
    @tumul1474 6 years ago +7

    dude, you make learning so awesome!! great work

  • @peretz7
    @peretz7 7 years ago +19

    Your geeky / cringey jokes are the best! Don't stop. Seriously.

  • @Skythedragon
    @Skythedragon 7 years ago +49

    Yo dang, I heard you like optimizers, so I made an optimizer to optimize your optimizer

  • @dexterdev
    @dexterdev 7 years ago +3

    Thanks for the effort you are taking for these videos. I respect it. :)

  • @powerrabbit
    @powerrabbit 7 years ago

    This is the coolest channel on YouTube!

  • @abhilashjoy
    @abhilashjoy 1 year ago

    Give this man a raise!

  • @phillipotey9736
    @phillipotey9736 4 years ago

    Just figured something out with nodes: the length (number of nodes per layer) is cleverness; the height (number of layers) is smartness.

  • @manmoffatt8497
    @manmoffatt8497 4 years ago

    Love the energy Siraj

  • @s3wannabesaliha238
    @s3wannabesaliha238 5 years ago

    Pretty cool video... good job Siraj... thank you...

  • @rajatmaheshwari916
    @rajatmaheshwari916 7 years ago +58

    Et tu, Brute Force... I laughed so hard at that point.

  • @Ninja-iq2xt
    @Ninja-iq2xt 7 years ago

    Loved this pace; at least it makes us understand what's going on, compared to the previous videos, which were quantity over quality.

  • @solidsnake013579
    @solidsnake013579 5 years ago +4

    I was drinking my tea when I heard Biggie and 2Pac. Jesus, I almost spit my tea out.

  • @elsmith1237
    @elsmith1237 6 years ago +1

    This is amazing! Thanks for the video :)

  • @randompast
    @randompast 5 years ago

    Good explanation, illuminated a few things for me, thank you.

  •  6 years ago +2

    Bayesian optimization itself has hyperparameters, like the kernel window size for Gaussian process fitting... all these BO libraries have these parameters at some default value, which almost never works for the model at hand...
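
    A minimal Python sketch of the point above, using scikit-learn's Gaussian process as the surrogate (the data is made up, and the RBF length_scale is exactly the kind of "hyperparameter of the optimizer" this comment means):

        import numpy as np
        from sklearn.gaussian_process import GaussianProcessRegressor
        from sklearn.gaussian_process.kernels import RBF

        # Toy observations of an objective f(x) at a few sampled hyperparameter values.
        X = np.array([[0.1], [0.4], [0.7], [0.9]])
        y = np.array([0.3, 0.8, 0.5, 0.2])

        # The GP surrogate itself has a hyperparameter: the kernel's length scale.
        # The default of 1.0 may be far too smooth (or too wiggly) for your objective.
        gp = GaussianProcessRegressor(kernel=RBF(length_scale=1.0))
        gp.fit(X, y)

        mean, std = gp.predict(np.array([[0.5]]), return_std=True)
        print(mean, std)  # posterior belief about f(0.5), which the acquisition function uses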

  • @phil.s
    @phil.s 7 years ago

    Thank you for this video. I am currently testing Scikit-Optimize to tune the network I am working on.
    It supports Bayesian optimization and is simple to implement, whereas hyperas likes to give errors.
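
    For readers who want to try the same thing, a minimal Scikit-Optimize sketch (the objective and search range below are placeholders for whatever your own network exposes):

        from skopt import gp_minimize
        from skopt.space import Real

        # Placeholder objective: in practice, train your network with the given
        # learning rate and return the validation loss.
        def objective(params):
            (learning_rate,) = params
            return (learning_rate - 0.01) ** 2  # stand-in for a real validation loss

        result = gp_minimize(
            objective,
            [Real(1e-5, 1e-1, prior="log-uniform", name="learning_rate")],
            n_calls=20,       # number of objective evaluations
            random_state=0,
        )
        print(result.x, result.fun)  # best hyperparameter and best observed loss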

  • @FinanceLogic
    @FinanceLogic 2 years ago

    Bayes is not as random as you seem to think around 5 minutes in. But I did learn a lot here, thanks.

  • @FinanceLogic
    @FinanceLogic 2 years ago

    It just clicked how a random forest really works, less than 1 minute into this video. I feel sick because the world is so interesting.

  • @luck3949
    @luck3949 7 years ago +11

    Can we train a neural network to optimize hyperparameters?

    • @SirajRaval
      @SirajRaval  7 years ago +5

      I've never read a paper that's done that, but it's totally possible! All functions are neural networks if you stare at them long enough; you should definitely try it out.

    • @freediugh416
      @freediugh416 7 years ago +1

      -----------> . (tactical dot in case OP wants to share results)

    • @AkashSwamyBazinga
      @AkashSwamyBazinga 7 years ago

      Yes, I guess. Try using GPyOpt, which is basically a black-box function optimization library written in Python.

    • @deeplearningpartnership
      @deeplearningpartnership 6 years ago

      Yes.

    • @hfkssadfrew
      @hfkssadfrew 6 years ago

      If you have hundreds of hyperparameters, this would be better than a GP, but usually we don't.

  • @keturananny6559
    @keturananny6559 4 years ago

    I enjoyed this so much!

  • @alvincay100
    @alvincay100 7 years ago +8

    Just when you think you've heard every pronunciation of Gaussian possible...

    • @SirajRaval
      @SirajRaval  7 years ago

      haha always something new

  • @justinwhite2725
    @justinwhite2725 3 years ago

    Early in the video I was thinking, 'would a Monte Carlo approach work?' When you got into exploration/exploitation, I decided it might.
    The higher-level math you are doing here I don't get (or maybe I need someone else to explain it), but Monte Carlo is something I've used before, and I think it might be good enough.
    You could seed in a set of likely values and let it add new ones whenever it heads toward an upper or lower bound. The nice thing about Monte Carlo is that it would keep exploring possibilities as the model matures and switch over to something else if that winds up performing better.
    This obviously works better for integer parameters than for continuous values.
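
    A minimal sketch of the Monte Carlo idea above: plain random search over an integer hyperparameter with a crude widen-the-bounds rule (the objective is a made-up stand-in for a real validation score):

        import random

        def objective(n_units):
            # Stand-in for training a model with `n_units` and returning validation score.
            return -(n_units - 48) ** 2

        low, high = 8, 64
        best_x, best_score = None, float("-inf")
        for _ in range(50):
            x = random.randint(low, high)  # explore: sample uniformly at random
            score = objective(x)
            if score > best_score:
                best_x, best_score = x, score
            # The comment's 'add new values at the edge' rule: if the best point
            # sits on a bound, widen the search range in that direction.
            if best_x == high:
                high *= 2
            elif best_x == low and low > 1:
                low = max(1, low // 2)

        print(best_x, best_score)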

  • @antonylawler3423
    @antonylawler3423 7 years ago

    I've only ever seen the Kernel Trick glossed over. I'd love it if you could find an opportunity to spend a few minutes on it.

  • @ketanpandey2561
    @ketanpandey2561 6 years ago +1

    What about genetic algorithms? Can they be used to optimize hyperparameters? For example, using the TPOT library.
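
    They can; a minimal sketch with TPOT, which evolves scikit-learn pipelines (and their hyperparameters) with a genetic algorithm. Assumes the `tpot` package is installed; the dataset is just an example:

        from sklearn.datasets import load_digits
        from sklearn.model_selection import train_test_split
        from tpot import TPOTClassifier

        X, y = load_digits(return_X_y=True)
        X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

        # A population of candidate pipelines is mutated and crossed over
        # each generation, keeping the fittest (highest cross-validation score).
        tpot = TPOTClassifier(generations=5, population_size=20,
                              random_state=0, verbosity=2)
        tpot.fit(X_train, y_train)
        print(tpot.score(X_test, y_test))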

  • @Egop3105
    @Egop3105 7 years ago +3

    For a tutorial on how to install and use Spearmint (an awesome Bayesian Optimization library by Jasper Snoek) check out this link: bitbucket.org/uhasseltmachinelearning/spearmint

  • @larryteslaspacexboringlawr739
    @larryteslaspacexboringlawr739 7 years ago

    thank you for hyperparameters video

  • @_jiwi2674
    @_jiwi2674 5 years ago +1

    Hi Siraj, thanks a ton for the video! I am unsure of what you meant by the utility of the expectation of the function f. You said it tells us which regions of the domain of f are best to sample from, but I can't quite follow what you mean by that. Would highly appreciate some help with this!

  • @planktonfun1
    @planktonfun1 6 years ago

    Frequentist and Bayesian results are almost the same for the first 20% of the data, but Bayesian also includes uncertainty, so there's that.

  • @zihanqiao2850
    @zihanqiao2850 4 years ago

    Love your videos.

  • @Nola1222Piano
    @Nola1222Piano 7 years ago +3

    I want to make a neural network that converts fiction books to movie scripts, then, based on the character descriptions in the book, finds the best actor in a DB, and, based on the information in the book, finds good filming locations. I'm very new to AI and don't know anything. Is this possible with AI? Should I train on 3 different datasets, and how? And what NN should I use to do all of that at the same time?

    • @SirajRaval
      @SirajRaval  7 years ago +1

      would be great, use the IMDB dataset

  • @swamysriman7147
    @swamysriman7147 4 years ago

    So..... Gradient descent is a special case of Bayesian optimization, right?

  • @normannborg
    @normannborg 4 years ago

    Came here for hyperparameter optimization, found an SVM explanation.

  • @floopybits8037
    @floopybits8037 2 years ago

    Nice explanation

  • @NataliaRevenga
    @NataliaRevenga 2 months ago

    Can we talk about the amazing Bayesians vs. frequentists song?

  • @adityashinde6202
    @adityashinde6202 7 years ago

    How about using evolutionary algorithms to search for optimal hyperparameter values? I'm not sure how well that works in comparison to Bayesian optimization, though.

  • @RaymondWong
    @RaymondWong 6 years ago

    For tuning hyperparameters, how does Bayesian optimization compare to PSO? Any risk of overfitting when tuning the hyperparameters?

    • @lucasnildaimon7598
      @lucasnildaimon7598 5 years ago

      To answer the second question: yes, overfitting is still an open problem in hyperparameter optimization. You can find information about some adopted methods that try to avoid it in Section 1.6.4 of this book: www.automl.org/wp-content/uploads/2019/05/AutoML_Book_Chapter1.pdf

  • @deepak3303
    @deepak3303 7 years ago

    For a classification model, how do you optimize hyperparameters using CAP curve analysis?

  • @kaikewesleyreis
    @kaikewesleyreis 7 years ago

    Is there going to be a video about feedforward neural nets?

  • @deepak3303
    @deepak3303 7 years ago

    Why not use a binary search algorithm to eliminate half of the possible hyperparameter values, rather than brute force?

  • @gbengaomotara2102
    @gbengaomotara2102 6 years ago

    Any schemes for initializing the likelihoods?

  • @vijaykoravi7583
    @vijaykoravi7583 7 years ago

    Hey Siraj, can you tell us about Replika?

  • @jasneetsingh4018
    @jasneetsingh4018 7 years ago

    Docker tutorial please... much needed!!

  • @vipulsonar4561
    @vipulsonar4561 6 years ago

    Cool..!!
    Can you please tell me the best algorithm for video summarisation?

  • @rahulahoop1
    @rahulahoop1 4 years ago +2

    Thank you for making data science entertaining, for real. Would you be able to run some more examples of the concepts as you explain them in future videos? :)

  • @chasegraham246
    @chasegraham246 7 years ago +3

    1:35 Getting kind of edgy, Siraj.

  • @manwa5192
    @manwa5192 7 years ago

    What are you doing in Amsterdam, brother? Do you work there now?

  • @williamchamberlain2263
    @williamchamberlain2263 6 years ago

    Isn't the Kernel Trick that you don't really transform the data points at all? You just use a similarity function that is equivalent to the inner product calculation that _would_ happen after transforming to a high-dimensional space with some feature map: the Kernel Trick is that the transformation never actually happens.
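
    A small numeric check of the equivalence this comment describes, using the degree-2 polynomial kernel (the explicit feature map phi is written out only to show that the two sides match; in practice it is never computed):

        import numpy as np

        def phi(v):
            # Explicit degree-2 feature map for 2-d input.
            x1, x2 = v
            return np.array([1.0,
                             np.sqrt(2) * x1, np.sqrt(2) * x2,
                             x1 ** 2, x2 ** 2,
                             np.sqrt(2) * x1 * x2])

        x = np.array([1.0, 2.0])
        y = np.array([3.0, 0.5])

        explicit = phi(x) @ phi(y)     # inner product in the 6-d feature space
        kernel = (1.0 + x @ y) ** 2    # same number, computed in the original 2-d space

        print(explicit, kernel)  # both print 25.0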

  • @sonOfLiberty100
    @sonOfLiberty100 7 years ago

    Why should TF-IDF be a better strategy than bag-of-words? I think it depends on the application.

  • @masteronepiece6559
    @masteronepiece6559 7 years ago

    Great video.
    Thanks.

  • @AIwithAniket
    @AIwithAniket 3 years ago

    Man, you have to make new videos 🙌. Don't lose hope.

  • @irenesforeheadforlyve5820
    @irenesforeheadforlyve5820 6 years ago

    Is it possible for me to optimize the number of neurons inside a convolution layer for image classification?

  • @deniscandido4116
    @deniscandido4116 7 years ago

    Is this already implemented in some library like sklearn or keras? I'd never read about this before, and it looks very promising.
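
    Not in sklearn itself, but scikit-optimize ships BayesSearchCV, a drop-in replacement for GridSearchCV; a minimal sketch (the dataset and ranges are just examples):

        from sklearn.datasets import load_iris
        from sklearn.svm import SVC
        from skopt import BayesSearchCV

        X, y = load_iris(return_X_y=True)

        # Picks the next (C, gamma) to try via Bayesian optimization
        # instead of exhaustively gridding the space.
        search = BayesSearchCV(
            SVC(),
            {"C": (1e-3, 1e3, "log-uniform"),
             "gamma": (1e-4, 1e1, "log-uniform")},
            n_iter=32,
            cv=3,
            random_state=0,
        )
        search.fit(X, y)
        print(search.best_params_, search.best_score_)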

  • @benbenjamin5
    @benbenjamin5 7 years ago

    Hey man, I've got a sort of unrelated question... Have you heard of useaible, and what do you think? I've heard some pretty crazy stuff, but I can't really find much on it... Is it legit? Anyway thanks, great video as always.

  • @gunjannaik7575
    @gunjannaik7575 7 years ago

    How can we use this to predict new parameters?

  • @heri_prieto
    @heri_prieto 7 years ago

    Siraj, where can I go to get the latest deep learning publications so I can then replicate the results?? Thank you! You are the shit!

  • @shreysharma7806
    @shreysharma7806 6 years ago

    Where does Bayesian optimization get the initial values of C and gamma from?

    • @lucasnildaimon7598
      @lucasnildaimon7598 5 years ago

      It's a prior belief, so you, or whoever is coding, should assume their initial values.
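
      In scikit-optimize terms, a sketch of how that prior enters (the ranges and seed point are arbitrary examples): the search-space bounds encode the prior belief, and an optional x0 seeds the surrogate with an explicit initial guess.

          from skopt import gp_minimize
          from skopt.space import Real

          def objective(params):
              C, gamma = params
              return (C - 1.0) ** 2 + (gamma - 0.1) ** 2  # stand-in for CV error

          result = gp_minimize(
              objective,
              [Real(1e-3, 1e3, prior="log-uniform", name="C"),      # prior belief:
               Real(1e-4, 1e1, prior="log-uniform", name="gamma")], # plausible ranges
              x0=[[1.0, 0.1]],  # optional explicit initial guess
              n_calls=15,
              random_state=0,
          )
          print(result.x)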

  • @hammadshaikhha
    @hammadshaikhha 7 years ago

    I am looking for clarification on the homework this week, because I think I have gotten confused between Bayesian regression and Bayesian optimization for finding hyperparameters. Is it correct to say that in a linear regression the hyperparameter is the gradient descent learning rate, not the slope coefficients? So we first use Bayesian optimization to find a good learning rate, and then run gradient descent to estimate the coefficient parameters? If this is true, I imagine we still want to minimize the sum of squared errors?
    Someone let me know if I am on the right track, thanks.

    • @SirajRaval
      @SirajRaval  7 years ago

      Hey Hammad! Great question. You can choose to do either; both are really cool ideas. For an example of Bayesian regression, see github.com/tdomhan/pyblr. As for Bayesian optimization for linear regression, what you said is correct: it's used first to find the optimal learning rate, while gradient descent estimates the coefficient parameters. (See the sketch at the end of this thread.)

    • @hammadshaikhha
      @hammadshaikhha 7 years ago

      Thanks for the clarification Siraj. I am going to do the Bayesian linear regression notebook; hopefully someone else does the Bayesian optimization to find the gradient descent parameter.

    • @laidbackmedia
      @laidbackmedia 1 year ago

      Bias routines involve illusions
      Diverge or continue
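
      A compact sketch of the two-level setup this thread describes, with scikit-optimize standing in for the Bayesian optimizer (an assumption; the thread doesn't name a library): the outer loop picks the learning rate, and the inner loop is plain gradient descent fitting the regression coefficients by minimizing the sum of squared errors.

          import numpy as np
          from skopt import gp_minimize
          from skopt.space import Real

          rng = np.random.default_rng(0)
          X = rng.normal(size=(100, 2))
          y = X @ np.array([2.0, -1.0]) + rng.normal(scale=0.1, size=100)

          def sse_after_gd(params):
              (lr,) = params
              w = np.zeros(2)
              for _ in range(200):                # inner loop: gradient descent on SSE
                  grad = -2 * X.T @ (y - X @ w)
                  w -= lr * grad / len(y)
              sse = np.sum((y - X @ w) ** 2)
              return sse if np.isfinite(sse) else 1e12  # guard against divergence

          # Outer loop: Bayesian optimization over the learning rate alone.
          result = gp_minimize(sse_after_gd, [Real(1e-4, 1.0, prior="log-uniform")],
                               n_calls=20, random_state=0)
          print("best learning rate:", result.x[0], "SSE:", result.fun)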

  • @karankatiyar5414
    @karankatiyar5414 6 years ago

    How do we do this in TensorFlow?

  • @thoughtsofapeer
    @thoughtsofapeer 7 years ago

    Hi Siraj, I was wondering if you would make a video for all of us new CS students out there on "good-to-know basics".
    I am from Denmark, so we don't have quite the same educational system. I am coming from the equivalent of high school and have just been accepted to the Technical University of Denmark, where I will study software technology. This is a bachelor's which I will get in three years, then continue with a two-year master's. I have no prior knowledge of programming or discrete math whatsoever 😱
    ty
    Edit: I will be starting September 5th :D

  • @shuvendubikash3792
    @shuvendubikash3792 7 years ago +6

    You have a problem: your videos have no learning sequence. I don't understand where to start or where to go next.

    • @diogojvc
      @diogojvc 7 years ago +6

      You have to train your biological neural network to learn new things based on past experiences. xD
      The best way to start is to... start. I mean, start by building the most basic thing, and then as you watch new videos you start to mess with new things; at least that is what I have been doing. That being said, the most important videos for getting started are the "Math of Intelligence" ones. Hope it helps, and good luck ;)

    • @hammadshaikhha
      @hammadshaikhha 7 years ago +2

      I felt the same way when I first found this channel and was watching random videos in no order. Currently you are watching the 7th video in this series; have you watched 1-6 already? He does have a sequence, and it's getting better and more connected over time. If you go to his channel and look at the playlists, he has 1) Python for Data Science and 2) Math of Intelligence; I think these would be the starting points.

    • @shuvendubikash3792
      @shuvendubikash3792 7 years ago

      @Akujin yes, it does.
      But what do you mean by "Math of Intelligence"? This playlist, or something else?

    • @diogojvc
      @diogojvc 7 years ago +1

      Yes, I meant the playlist.

    • @SirajRaval
      @SirajRaval  7 years ago +1

      what randomness said

  • @matthewdaly8879
    @matthewdaly8879 7 years ago +4

    What about gradient descent?

    • @eloyeligon6676
      @eloyeligon6676 7 years ago

      The problem is how you calculate the gradient.

  • @edouarddelaire1939
    @edouarddelaire1939 7 years ago

    I've already seen people using genetic algorithms to find hyperparameters, but I think that's not very efficient :/

  • @xumeixi382
    @xumeixi382 7 years ago

    Great video! It really made me laugh.

  • @chengjunli4518
    @chengjunli4518 7 years ago +1

    Can I get the subtitles? Thanks.

  • @colox97
    @colox97 7 years ago

    Quantum computers may be very, very useful for this kind of task: they'd parallelize the entire process and allow REAL BIG data to be handled much better.
    Isn't this a P problem?
    Am I right?

  • @tommyeastman2999
    @tommyeastman2999 6 years ago

    that song is a jam

  • @GuillaumeVerdonA
    @GuillaumeVerdonA 6 years ago

    HAHAHA that "mmm look at that Gaussian" meme has a pic from a McGill prof I knew

  • @rawiasammout5833
    @rawiasammout5833 7 years ago

    Please, I need to understand SVM and PSO.

  • @qunchongqa
    @qunchongqa 9 months ago

    interesting and useful

  • @CrazyGamerSidh
    @CrazyGamerSidh 7 years ago +1

    Background 🙃

  • @jmoz
    @jmoz 5 years ago

    Equally interesting and ridiculous.

  • @JosephQPham
    @JosephQPham 6 years ago

    humor and intelligence

  • @АнтонДостоевский-ж2ш

    Cool skunk, thank you

  • @michaelvarney.
    @michaelvarney. 7 years ago

    Gauss, as in louse, not Gauss, as in boss.

  • @killordie2412
    @killordie2412 7 years ago

    Can you share your collection of memes, please?

    • @chicken6180
      @chicken6180 7 years ago

      No, you see, his memes change over time. He doesn't even find memes anymore; he has software to crawl the web and predict which memes Siraj will most enjoy.

    • @SirajRaval
      @SirajRaval  7 years ago

      what spark said

    • @Ur.Podcast_Buddy
      @Ur.Podcast_Buddy 4 years ago

      @@SirajRaval waiting for a new video

  • @Privacy-LOST
    @Privacy-LOST 5 years ago +1

    If you think Siraj is exciting, have a look at this awesome dude on the same topic:
    ua-cam.com/video/con_ONbhD2I/v-deo.html 😂

  • @exec9292
    @exec9292 2 years ago +1

    who did you copy to make this video lol

  • @nickmcneely5601
    @nickmcneely5601 7 years ago +11

    G-owwww-sian, not G-awwwww-sian.

    • @nickmcneely5601
      @nickmcneely5601 7 years ago

      Igotattitude93 That's what I said.

    • @nickmcneely5601
      @nickmcneely5601 7 years ago

      Igotattitude93 I know plenty of Americans who say it correctly. Same for Euler. Shit, even Nietzsche.

    • @_sudipidus_
      @_sudipidus_ 7 years ago +3

      Pronunciation depends on your hyperparameter selection :P

    • @SirajRaval
      @SirajRaval  7 years ago

      thank you

  • @blindfoldchessabhi
    @blindfoldchessabhi 6 years ago +1

    Yey

  • @chicken6180
    @chicken6180 7 years ago

    💯

  • @MrPanthershah
    @MrPanthershah 4 years ago

    I somehow find all that animation distracting from getting the point across. Mehhh

  • @yb801
    @yb801 6 years ago

    I suck

  • @averageengineeer
    @averageengineeer 7 years ago

    Headache!!! :(

    • @SirajRaval
      @SirajRaval  7 years ago

      Please clarify: what specifically gave you a headache? Thanks.

  • @ismaelgoldsteck5974
    @ismaelgoldsteck5974 7 years ago

    Boi I'm early

  • @AkarshSundareswar
    @AkarshSundareswar 7 years ago +5

    First :p

    • @SirajRaval
      @SirajRaval  7 years ago +2

      congrats

    • @AkarshSundareswar
      @AkarshSundareswar 7 years ago +7

      I want to thank my parents, teachers, brother, sister and my dog for this great opportunity. Without them, this would not be possible.

  • @PCCoooler
    @PCCoooler 4 years ago

    I can't tell you how much I hate this guy, but this is the only video that explains what I want to know :'(

  • @vaptua4109
    @vaptua4109 7 years ago

    Second