Automated Machine Learning: Grid Search and Random Search

  • Published 29 Sep 2024
  • In this video, we look at two methods for hyperparameter optimization (finding the hyperparameters of a learning algorithm that maximize performance): grid search and random search.
    If you liked the video, make sure to share it with others!
    Any comments, feedback, or questions? Let me know through the comments section!
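
A minimal sketch of the two methods, assuming scikit-learn's GridSearchCV and RandomizedSearchCV on an illustrative SVM tuning problem (the dataset and parameter ranges are placeholders, not taken from the video):

    # Grid search tries every combination; random search samples a fixed
    # budget of configurations from distributions.
    from scipy.stats import loguniform
    from sklearn.datasets import load_iris
    from sklearn.model_selection import GridSearchCV, RandomizedSearchCV
    from sklearn.svm import SVC

    X, y = load_iris(return_X_y=True)

    # Grid search: exhaustive over a discrete grid (3 x 3 = 9 candidates).
    grid = GridSearchCV(
        SVC(),
        param_grid={"C": [0.1, 1, 10], "gamma": [0.01, 0.1, 1]},
        cv=5,
    )
    grid.fit(X, y)
    print("grid search best:", grid.best_params_)

    # Random search: the same budget of 9 candidates, but drawn from
    # continuous distributions, so the ranges are explored more freely.
    rand = RandomizedSearchCV(
        SVC(),
        param_distributions={"C": loguniform(1e-2, 1e2),
                             "gamma": loguniform(1e-3, 1e0)},
        n_iter=9,
        cv=5,
        random_state=0,
    )
    rand.fit(X, y)
    print("random search best:", rand.best_params_)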

COMMENTS • 10

  • @papaglory01 • 1 year ago • +1

    Why are we using logUniform and not just uniform? I didn't quite get that.

    • @aixplained4763 • 1 year ago • +7

      Great question! LogUniform sampling is especially useful when the hyperparameter's range spans multiple orders of magnitude. For example, suppose we want to find a good learning rate in the range [0.01, 100]. Because the interval [10, 100] is much wider than [1, 10], uniform sampling will place most of its points in [10, 100]. If we instead want to sample evenly across the orders of magnitude, we can sample LogUniformly: in log10 space, [10, 100] maps to [1, 2] and [1, 10] maps to [0, 1], intervals of equal width, so each decade receives the same share of samples (see the sketch after this thread). If you'd like to know more, this is a good blog post (the answer was inspired by it): towardsdatascience.com/why-is-the-log-uniform-distribution-useful-for-hyperparameter-tuning-63c8d331698

    • @papaglory01 • 1 year ago • +1

      @aixplained4763 Amazing! I get it now, thank you for this and the vid.
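
A small sketch illustrating the reply above: drawing from [0.01, 100] uniformly concentrates almost all samples in the top decade, while drawing uniformly in log10 space gives every decade an equal share (NumPy only; the ranges mirror the learning-rate example):

    import numpy as np

    rng = np.random.default_rng(0)
    n = 100_000

    # Uniform on [0.01, 100]: the decade [10, 100] covers ~90% of the interval.
    uni = rng.uniform(0.01, 100, size=n)

    # Log-uniform: sample the exponent uniformly on [log10(0.01), log10(100)]
    # = [-2, 2], then exponentiate; each decade then gets an equal share.
    logu = 10 ** rng.uniform(-2, 2, size=n)

    for name, s in [("uniform", uni), ("log-uniform", logu)]:
        print(name,
              f"[0.01,1): {np.mean(s < 1):.2f}",
              f"[1,10): {np.mean((s >= 1) & (s < 10)):.2f}",
              f"[10,100]: {np.mean(s >= 10):.2f}")

This should print roughly 0.01 / 0.09 / 0.90 for uniform versus 0.50 / 0.25 / 0.25 for log-uniform (the first bucket spans two decades, hence 0.50).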

  • @mohammadadnan7319 • 1 year ago • +1

    Great explanation. It would be helpful if hyperparameter tuning were also covered in the video, along with other background information.

    • @aixplained4763 • 1 year ago

      Good to hear that you found it helpful! Thank you for the feedback; this is covered in some of the previous videos (part of this playlist: ua-cam.com/video/U-XYvDhGQ2c/v-deo.html&pp=gAQB)

  • @zukofire6424 • 8 months ago

    Thank you for the explanation, I found it very clear :)

  • @drazen.antunovic • 6 months ago

    How often are these methods even used in practice? From what I understand, you would need to retrain the neural network as many times as there are combinations/configurations.

    • @aixplained4763 • 6 months ago • +1

      Great question! These methods are indeed commonly used in practice to get the best performance on the problem at hand. You do have to retrain the neural network once per configuration, which is expensive, but this can be mitigated by evaluating configurations on a subset of the data. It is also possible to use successive halving to make the hyperparameter search more efficient (see the sketch below).
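
A hedged sketch of the successive halving mentioned in the reply above, using scikit-learn's HalvingRandomSearchCV (an experimental API that must be enabled explicitly); the estimator and parameter ranges are illustrative assumptions, not from the video:

    # Successive halving: start many candidate configurations on a small
    # resource budget, keep the best fraction, and re-evaluate the
    # survivors with more resources, so weak configurations are discarded
    # cheaply instead of being trained on the full data.
    from scipy.stats import randint
    from sklearn.datasets import load_digits
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.experimental import enable_halving_search_cv  # noqa: F401
    from sklearn.model_selection import HalvingRandomSearchCV

    X, y = load_digits(return_X_y=True)

    search = HalvingRandomSearchCV(
        RandomForestClassifier(random_state=0),
        param_distributions={"max_depth": randint(2, 20),
                             "n_estimators": randint(10, 200)},
        resource="n_samples",  # grow the training-set size each round
        factor=3,              # keep the top third of candidates per round
        random_state=0,
    )
    search.fit(X, y)
    print(search.best_params_)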

  • @sabisme7009 • 8 months ago

    Thank you so much for the video!

  • @jamalnuman • 7 months ago

    great