Ridge vs Lasso Regression, Visualized!!!

  • Published 26 Nov 2024

COMMENTS • 424

  • @statquest
    @statquest  2 years ago +5

    Support StatQuest by buying my book The StatQuest Illustrated Guide to Machine Learning or a Study Guide or Merch!!! statquest.org/statquest-store/

  • @platanus726
    @platanus726 3 years ago +19

    You are truly an angel. Your videos on Ridge, Lasso and Elastic Net really help with my understanding. They're way better than the lectures at my university.

  • @Mabitesurtaglotte
    @Mabitesurtaglotte 4 years ago +171

    Still the best stat videos on YouTube
    You have no idea how much you've helped me. You'll be in the acknowledgments of my diploma

    • @statquest
      @statquest  4 years ago +11

      Wow, thanks!

    • @johnnyt5108
      @johnnyt5108 3 years ago +8

      He'd probably prefer to be in the acknowledgments of your checkbook, then

    • @reflections86
      @reflections86 1 year ago

      @@johnnyt5108 I am sure many people will do that by buying the reports and the book Josh has written.

    • @LauraMarieChua
      @LauraMarieChua 1 year ago +2

      update after 2 years: did u include him on ur diploma?

    • @chzpan
      @chzpan 2 months ago

      Agree with you, yet "unfortunately, no one asked me!".

  • @thryce82
    @thryce82 4 years ago +6

    this channel is saving my ass when it comes to applied ML class. so frustrating when a dude who has been researching Lasso for 10 years just breaks out some linear algebra derivation and then acts like you're supposed to instantly understand that...... thanks for taking the time to come up with an exposition that makes sense.

  • @huzefaghadiyali5886
    @huzefaghadiyali5886 2 years ago +20

    I'm just gonna take a minute to appreciate the effort you put into your jokes to make the video more interesting. It's quite underrated.

  • @afaisaladhamshaazi7519
    @afaisaladhamshaazi7519 4 years ago +11

    I was wondering why I missed out on this video while going through the ones on Ridge and Lasso Regression from Sept-Oct 2018. Then I noticed this is a video you put out only a few days ago. Awesome. Much gratitude from Malaysia. 🙇

  • @hopelesssuprem1867
    @hopelesssuprem1867 1 year ago +2

    Many people on the Internet explain regression regularization using polynomial features, as if ridge and lasso were there to reduce the curvature of the line, when in fact in that case you just need to find the right degree of the polynomial. You are one of the few who have shown the real essence of regularization in linear regression: the bottom line is that we simply penalize the model, trading a bit of bias for lower variance through slope changes.
    By the way, real overfitting in regression shows up clearly in data with a large number of features, some of which correlate strongly with each other, combined with a relatively small number of samples; that is the case where L1/L2 regularization will be useful.
    Thank you so much for a very good explanation.
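
    A minimal sketch of that scenario (our illustration, assuming scikit-learn; not from the video): many strongly correlated features and few samples, where plain least squares overfits while Ridge and Lasso hold up better on held-out data.

    ```python
    import numpy as np
    from sklearn.linear_model import LinearRegression, Ridge, Lasso
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(0)
    n_samples, n_features = 40, 100                # fewer samples than features
    base = rng.normal(size=(n_samples, 5))         # 5 underlying signals
    # 100 features built as noisy copies of the 5 signals -> strong correlation
    X = np.repeat(base, 20, axis=1) + 0.1 * rng.normal(size=(n_samples, n_features))
    y = base @ rng.normal(size=5) + 0.5 * rng.normal(size=n_samples)

    X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
    for name, model in [("OLS", LinearRegression()),
                        ("Ridge", Ridge(alpha=1.0)),
                        ("Lasso", Lasso(alpha=0.1, max_iter=10_000))]:
        model.fit(X_tr, y_tr)
        print(name, "train R^2:", round(model.score(X_tr, y_tr), 3),
              "test R^2:", round(model.score(X_te, y_te), 3))
    ```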

  • @chyldstudios
    @chyldstudios 4 years ago +61

    The visualization really sells it.

  • @flavio4923
    @flavio4923 2 years ago +1

    I've never been good with this kind of math/statistics, because when I encounter the formulas in books I tend to forget or not understand the symbols. Your videos make it possible to go beyond the notation and learn the idea behind these concepts, so as to apply them in machine learning. Thank you!

  • @Azureandfabricmastery
    @Azureandfabricmastery 4 years ago +30

    Hello Josh, Ridge and Lasso clearly visualized :) I must say, the one thing that makes your videos clearly explained to curious minds like me is the visual illustrations you provide in your stat videos. Glad to have found them. Thank you very much for your efforts.

    • @statquest
      @statquest  4 years ago

      Thank you very much! :)

  • @lakshitakamboj198
    @lakshitakamboj198 3 years ago +4

    Thanks, Josh, for this amazing video. I promise to support this channel once I land a job offer as a data scientist. This is the only video on YouTube that practically shows all the algos.

    • @statquest
      @statquest  3 years ago

      Thank you and Good luck!

  • @philwebb59
    @philwebb59 3 years ago +4

    Best visuals ever! No matter how much I think I know about stats, I always learn something from your videos. Thanks.

    • @statquest
      @statquest  3 years ago

      Thanks so much! BAM! :)

  • @baharehbehrooziasl9517
    @baharehbehrooziasl9517 3 months ago +1

    I thought I was familiar with the concept of regularization, but your videos always help me grasp the concept more easily and, of course, more deeply!

  • @hemaswaroop7970
    @hemaswaroop7970 4 years ago +35

    Fantastic, Josh!! Thank you very very much. We all owe you a lot many thanks. "I" owe you a lot. 😊😊👍👍

  • @mansoorbaig9232
    @mansoorbaig9232 4 years ago +2

    You are awesome, Josh. It always bothered me why L1 would drive coefficients to 0 and L2 would not, and you explained it so simply.

  • @aliaghayari2294
    @aliaghayari2294 10 months ago +1

    dude is creating quality videos and replies to every comment!
    talk about dedication!
    thanks a lot

  • @srs.shashank
    @srs.shashank 3 years ago +2

    As a result, when the slope becomes 0 for large lambda in lasso, we can use lasso for feature selection.
    Nice video, Josh!!
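
    A quick sketch of that effect (our illustration, assuming scikit-learn): as lambda (called alpha below) grows, Lasso coefficients hit exactly 0, the least useful features first.

    ```python
    import numpy as np
    from sklearn.linear_model import Lasso

    rng = np.random.default_rng(1)
    X = rng.normal(size=(100, 4))
    y = 3 * X[:, 0] + 0.1 * X[:, 1] + rng.normal(size=100)  # features 2 and 3 are useless

    for alpha in [0.01, 0.1, 1.0, 10.0]:
        coefs = Lasso(alpha=alpha).fit(X, y).coef_
        print(f"alpha={alpha:>5}:", np.round(coefs, 3))
    # The weak/useless coefficients reach exactly 0 first; at large alpha all do,
    # which is why Lasso can act as a feature selector.
    ```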

  • @ehg02
    @ehg02 4 years ago +70

    Can we start a petition to change the lasso and ridge names to absolute value penalty and squared penalty pwease?

    • @statquest
      @statquest  4 years ago +6

      That would be awesome! :)

    • @JoaoVitorBRgomes
      @JoaoVitorBRgomes 4 years ago +1

      @@statquest I am listening to u on Spotify

    • @statquest
      @statquest  4 years ago +2

      @@JoaoVitorBRgomes Bam!

  • @siddhu2605
    @siddhu2605 3 years ago +1

    You are a super professor and I'll give you an infinity BAM!!!!!!!!!! I really like the way you repeat earlier topics to refresh the student's memory; that is really helpful, and you have a lot of patience. Once again you proved that a picture is worth a thousand words.

    • @statquest
      @statquest  3 years ago

      Thank you very much! :)

  • @Spamdrew128
    @Spamdrew128 3 years ago +3

    I needed this information for my data science class and didn't expect such a well crafted and humorous video!
    You are doing great work sir!

  • @chunchen3450
    @chunchen3450 4 years ago +5

    Just found this channel today, great illustrations! Thanks for keeping the speaking speed down; that makes it easy for me to follow!

  • @geovannarojas2580
    @geovannarojas2580 4 years ago +3

    These videos are so clear and fun, they helped me a lot with modeling and statistics in biology.

  • @Ganeshkakade454
    @Ganeshkakade454 2 years ago

    Just wanna say... you are my Guru, meaning Teacher, in data science... more love to you from India

  • @ngochua6679
    @ngochua6679 3 years ago +1

    Fortunately, I asked you :)
    I agree squared and absolute penalty are better word choices for these regularization methods. Thanks again for making my ML at Scale a tad bit easier.

    • @statquest
      @statquest  3 years ago

      BAM! Thank you very much! :)

  • @arjunpukale3310
    @arjunpukale3310 4 years ago +1

    And that's the reason why lasso does a kind of feature selection and sets many weights to 0 compared to ridge regression. Now I know the reason behind it, thanks a lot ❤

  • @Sello_Hunter
    @Sello_Hunter 3 years ago +2

    This explained everything I needed to know in 9 minutes. Absolute genius, thank you!

    • @statquest
      @statquest  3 years ago +1

      Glad it was helpful!

  • @cat-.-
    @cat-.- 4 years ago +1

    I just became the 104th patron of the channel!

    • @statquest
      @statquest  4 years ago

      TRIPLE BAM!!! Thank you very much!!! :)

  • @r0cketRacoon
    @r0cketRacoon 4 months ago

    OMG!!! I always thought that Ridge was the better method for combating overfitting because it adds a squared penalty to the cost function, which reduces weights more heavily and pushes them toward 0 faster. Now you've changed my mind.

    • @statquest
      @statquest  4 months ago

      bam! They both have strengths and weaknesses.
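
      A one-line way to see why (our note, not from the video): near zero, the squared penalty's pull fades away, while the absolute-value penalty keeps a constant pull, which is why only Lasso parks a coefficient at exactly 0.

      ```latex
      \frac{d}{ds}\,\lambda s^{2} = 2\lambda s \xrightarrow[\;s \to 0\;]{} 0
      \qquad\text{vs.}\qquad
      \frac{d}{ds}\,\lambda \lvert s \rvert = \pm\lambda \quad (\text{constant for } s \neq 0)
      ```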

  • @markevans5648
    @markevans5648 4 years ago +6

    Great work Josh! Your songs get me every time.

  • @Imran_et_al
    @Imran_et_al 1 year ago +1

    The explanation can't be any better than this....!

  • @vishnuprakash9196
    @vishnuprakash9196 1 year ago +1

    The best. Definitely gonna come back and donate once I land a job.

  • @xvmmazy4398
    @xvmmazy4398 10 months ago +1

    Dude, you succeeded both at helping me and at making this funny while I'm struggling with my ML homework, thank you so much

    • @statquest
      @statquest  10 months ago

      Glad I could help!

  • @chrissmith1152
    @chrissmith1152 4 years ago +5

    Incredible videos; I've been watching all of your videos during quarantine for my future job interview. Still waiting for the time series tho. Thanks sir

  • @bikramsarkar1484
    @bikramsarkar1484 4 years ago +1

    You are a life saver! I have been trying to understand this for years now!! Thanks a ton!!!

  • @ruonanzheng2019
    @ruonanzheng2019 2 years ago +1

    Thank you, the regularization series videos from 2018 to 2020 are so helpful.😀

  • @Jack-mz7ox
    @Jack-mz7ox 3 years ago +1

    This is the perfect explanation I was searching for of why L1 can be used for feature importance!!!

  • @tvbala00s27
    @tvbala00s27 2 years ago +3

    Thanks a lot for this wonderful lesson... loved it... seeing how the function behaves with different parameters makes it etched in the memory

  • @shamshersingh9680
    @shamshersingh9680 4 months ago +1

    Hi Josh, please accept my heartfelt thanks for such a wonderful video. I guess your videos are an academy in themselves. Just follow along with your videos and BAM!! you are a master of Data Science and Machine Learning. 👏

  • @tymothylim6550
    @tymothylim6550 3 years ago +1

    Thank you very much for this video! It helped me visually understand how Lasso regression can remove some predictors from the final model!

  • @sravanlankoti5244
    @sravanlankoti5244 1 year ago +1

    Thanks for taking the time to explain ML concepts in an amazing manner with clear visualizations.
    Great work.

    • @statquest
      @statquest  1 year ago +4

      WOW! Thank you so much for supporting StatQuest! TRIPLE BAM!!! :)

    • @heteromodal
      @heteromodal 1 year ago +1

      @@statquest Hey Josh! What's your preferred way of being supported? Would Paypal be better than Patreon?

    • @statquest
      @statquest  1 year ago

      @@heteromodal It's really up to you - whatever is more convenient, and whether you want to be a long-term supporter or not.

    • @heteromodal
      @heteromodal 1 year ago

      @@statquest I meant, assuming I make a fixed-sum donation - would you see more of it through PP or Patreon :)

    • @statquest
      @statquest  1 year ago +1

      @@heteromodal If it's a one-time donation, then PayPal is probably the best.

  • @eminatabeypeker6305
    @eminatabeypeker6305 3 years ago +1

    You are really doing a great, great job. This channel is the best way to learn a lot of the right and important things in a short time.

    • @statquest
      @statquest  3 years ago

      Thank you very much! And thank you for your support!! BAM! :)

  • @ZinzinsIA
    @ZinzinsIA 2 years ago +1

    Great, many thanks, very understandable and clear. It gave me a good intuition of how lasso regression shrinks some variables to zero.

  • @jeffchoi6179
    @jeffchoi6179 4 years ago +1

    The best visualization I've ever seen

  • @RaviShankar-jm1qw
    @RaviShankar-jm1qw 4 years ago +4

    You simply amaze me with each of your videos. The best part is that the way you explain stuff is so original and simple. I would really love it if you could also pen a book on AI/ML. It would be a bestseller, I reckon. Keep up the good work and keep enlightening us :)

    • @statquest
      @statquest  4 years ago +1

      Wow, thank you!

    • @rainymornings
      @rainymornings 1 year ago

      This aged very well (he has a book now lol)

  • @drpkmath12345
    @drpkmath12345 4 years ago +4

    Ridge regression! Good topic to cover as always!

  • @thedanglingpointer8411
    @thedanglingpointer8411 4 years ago +1

    God of explanation !!! 🙏🏻🙏🏻🙏🏻 Awesome stuff 🙂🙂

  • @pmsiddu
    @pmsiddu 4 years ago +1

    Very well explained; this one video cleared all my doubts, along with the practical calculations and visualization. Kudos for the great job.

  • @yidong7706
    @yidong7706 2 years ago +1

    Thanks to this video I finally understand why lasso and ridge have the so-called shrinking effect.

  • @mihailtegovski4028
    @mihailtegovski4028 4 years ago +1

    You should receive a Nobel Prize.

  • @ismailelboujaddaini
    @ismailelboujaddaini 10 months ago +1

    Thank you so much
    Blessings from Spain/Morocco

  • @AhmedKhaled-xp7dm
    @AhmedKhaled-xp7dm 5 months ago

    Amazing series on regularization (as usual)!
    I just didn't quite understand why in ridge regression the weights/parameters never ever reach zero; I didn't give it much thought, but it didn't pop right out at me like it usually does in your videos lol. But again, great series!

  • @Physicsope875
    @Physicsope875 3 months ago +1

    Mind Blowing! Thank you for such valuable content

  • @kzengineai
    @kzengineai 4 years ago +1

    Your videos are very explanatory for studying this field...

  • @albertomontori2863
    @albertomontori2863 3 years ago +1

    this video.....you are my savior ❤️❤️❤️

  • @gavinaustin4474
    @gavinaustin4474 4 years ago +1

    Really enjoying these videos, Josh. Please keep 'em coming. Although I understand the distinction between correlation and interaction, I'd be interested to see how you might explain it in your inimitable fashion.

    • @statquest
      @statquest  4 years ago +1

      I'll put that on the to-do list.

    • @gavinaustin4474
      @gavinaustin4474 4 years ago

      @@statquest I'm pushing my luck here, but one more item, if I may: the difference between PCA and factor analysis. Often, these are distinguished in general terms (e.g., they are concerned with the total variance vs the shared variance, respectively), but I think that the best way to distinguish them would be to apply both methods to the same data set. I would be most interested in seeing that done.

  • @mathildereynes8508
    @mathildereynes8508 4 years ago +3

    Could be interesting to see the explanation in the case of a multidimensional problem with more than 2 features, but very nice video!

    • @neillunavat
      @neillunavat 4 years ago +1

      Be grateful we've got such a nice guy.

  • @ganpatinatrajan5890
    @ganpatinatrajan5890 1 year ago +2

    Excellent Explanations 👍👍👍
    Great work 👍👍👍

  • @manjushang
    @manjushang 3 years ago

    "Unfortunately, no one asked me" 😀.
    Unique content. Hats off!

    • @manjushang
      @manjushang 3 years ago

      Also, it would be of great help if you could explain the following points:
      1. How lasso regression excludes the useless variables.
      2. How ridge regression does a little better when most variables are useful.
      Thanks,
      Manjusha

    • @statquest
      @statquest  3 years ago

      Thanks!

  • @--..__
    @--..__ 4 years ago

    "L1 norm" and "L2 norm" are very common phrases; if you aren't familiar with them, that is on you... They are clearer language, since they make it immediately obvious that this is just a distance defined by whichever norm you are using in your space. Calling it "squared" or "absolute value" obscures the fact that it is a norm and not some other motivation.

  • @PerfectPotential
    @PerfectPotential 4 years ago +2

    "I got ... calling a young StatQuest phone" 😁
    (The Ladys might love your work fam.)

  • @blameitonben
    @blameitonben 1 year ago

    But how do we pick the right penalty? As a college professor in econ, your lectures and dry humor are perfect for me as I tool up in ML.

    • @statquest
      @statquest  1 year ago +1

      We use cross validation to test a bunch of different penalties and select the one that performs the best.
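
      For instance (a sketch assuming scikit-learn, not Josh's own code), cross-validating a grid of candidate penalties looks like this:

      ```python
      import numpy as np
      from sklearn.linear_model import RidgeCV, LassoCV

      rng = np.random.default_rng(2)
      X = rng.normal(size=(80, 10))
      y = X[:, 0] - 2 * X[:, 1] + rng.normal(size=80)

      alphas = np.logspace(-3, 3, 25)                 # candidate penalty strengths
      ridge = RidgeCV(alphas=alphas).fit(X, y)        # efficient leave-one-out CV
      lasso = LassoCV(alphas=alphas, cv=5).fit(X, y)  # 5-fold CV
      print("best ridge penalty:", ridge.alpha_)
      print("best lasso penalty:", lasso.alpha_)
      ```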

  • @sane7263
    @sane7263 1 year ago +1

    That's a great video, Josh!
    6:10 they should definitely have asked you 😂

  • @berkceyhan5031
    @berkceyhan5031 2 years ago +1

    I first like your videos then watch them!

  • @siddhantk007
    @siddhantk007 4 years ago +1

    Your videos are super intuitive.. thanks a lot, sir

  • @omnesomnibus2845
    @omnesomnibus2845 4 years ago +3

    Really excellent video Josh. You consistently do a great job, and I appreciate it. Could you make a video showing the use of Ridge regression and especially Lasso regression in parameter selection? I had to do that once, and it is complicated. From your example it seems that using neither penalty gives you the best response. So, in what circumstances do you want to use the regression to improve your result? If you are using lasso regression to find the top 3 predictive parameters, how does this work? What are the dangers? How do you optimally use it? A complicated subject for sure! I'm sorry if this is covered in your videos on Lasso and Ridge regression individually, I am watching them next. I agree with your naming convention btw, squared and absolute-value penalty is MUCH more intuitive!

    • @statquest
      @statquest  4 years ago

      Watch the other regularization videos first. I cover some of what you would like to know about parameter selection in my video on Elastic-Net in R: ua-cam.com/video/ctmNq7FgbvI/v-deo.html

    • @omnesomnibus2845
      @omnesomnibus2845 4 years ago

      @@statquest I will check out those videos, thanks. I actually did use elastic net regularization. The whole issue is complex (for somebody without a decent stats background) because the framework of how everything works isn't covered very well AND simply anywhere that I could find, without going down several pretty deep rabbit holes. Some of the parameter selections that I remember were suggested depended on the assumption that the parameters were independent, which was NOT the case in my situation. I'm still not sure what the best approach would have been.

    • @omnesomnibus2845
      @omnesomnibus2845 4 years ago

      @@statquest As an additional note, I've always found that examples and exercises are even more important than theory, while theory is essential at times too. In many math classes concepts were laid out in formal and generalized glory, but I couldn't get the concept at all until I put hard numbers or examples to it. It's probably not the subject of your channel or in your interest, but I think some really hand-holding examples of using these concepts in some kaggle projects, or going through what some interesting papers did, would be a great way of bringing the theory and the real world together.

    • @statquest
      @statquest  4 years ago +2

      @@omnesomnibus2845 I do webinars that focus on the applied side of all these concepts. So we can learn the theory, and then practice it with real data.

    • @omnesomnibus2845
      @omnesomnibus2845 4 years ago +1

      @@statquest That's great!

  • @josherickson5446
    @josherickson5446 4 years ago +1

    Dude you're killing it!

  • @kseniyaesepkina3734
    @kseniyaesepkina3734 2 years ago +1

    Just an incredible explanation!

  • @baharb5321
    @baharb5321 3 years ago +1

    Awesome! And I should mention: actually, we are asking YOU!

  • @stevelittle7219
    @stevelittle7219 3 years ago +1

    Love the They Might Be Giants-esque intro.

  • @praveerparmar8157
    @praveerparmar8157 3 years ago +13

    "Unfortunately, no one asked me" 🤣🤣🤣

  • @anshpujara14
    @anshpujara14 4 years ago +1

    Can you do a lecture on Kohonen Self Organising Maps?

  • @sunritjana4573
    @sunritjana4573 3 years ago +1

    Thanks a lot for these awesome videos, you deserve a million followers and a lot of credit :)
    I just love these and they are KISS: so simple and understandable. I owe you a lot of thanks and credit :D

    • @statquest
      @statquest  3 years ago

      Thank you so much 😀!

  • @tanbui7569
    @tanbui7569 3 years ago +2

    Thank you for your work, as always. It's AWESOME. I just have some questions. Why is there a kink in the SSR curve for Lasso Regression? Is it because we are adding lambda * |slope|, which is a linear component? And does the curve for Ridge Regression stay a parabola because we are adding lambda * slope^2, which is a parabolic component?

    • @statquest
      @statquest  3 years ago +1

      I believe that is correct.

    • @sudeshnasen2731
      @sudeshnasen2731 2 years ago

      Hi. Great video! I had the same query as to why we cannot see a similar kink in the Ridge Regression cost function vs. slope curve.
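
      A small sketch (ours, not from the video) that reproduces both curves and answers the thread: the lambda * |slope| term adds a corner ("kink") at slope = 0, while the lambda * slope^2 term is itself a parabola, so the Ridge total stays smooth.

      ```python
      import numpy as np
      import matplotlib.pyplot as plt

      slopes = np.linspace(-2, 2, 401)
      ssr = (1.0 - slopes) ** 2    # toy SSR whose least-squares optimum is slope = 1
      lam = 1.5
      plt.plot(slopes, ssr + lam * np.abs(slopes), label="SSR + lambda*|slope| (Lasso)")
      plt.plot(slopes, ssr + lam * slopes ** 2, label="SSR + lambda*slope^2 (Ridge)")
      plt.xlabel("slope"); plt.ylabel("penalized SSR"); plt.legend(); plt.show()
      ```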

  • @niyousha6868
    @niyousha6868 4 years ago +1

    Thank you Josh

  • @rishipatel7998
    @rishipatel7998 2 years ago +1

    This guy is amazing.... BAM!!!

  • @deojeetsarkar2006
    @deojeetsarkar2006 4 years ago +1

    You're the god of studies

  • @sebastiencrepel5032
    @sebastiencrepel5032 3 years ago +1

    Great videos. Very helpful. Thanks!

  • @NRienadire
    @NRienadire 3 years ago +1

    Great videos, thank you very much!!!

  • @adibhatlavivekteja2679
    @adibhatlavivekteja2679 4 years ago +3

    Explain stats to a 10-year-old?
    Me: "You kid, Subscribe and drill through all the content of StatQuest with Josh Starmer"

  • @katielui131
    @katielui131 9 months ago +1

    This is amazing - thanks for this

  • @myunghee7231
    @myunghee7231 4 years ago +3

    Thank you!!!!! I have a question: do you have a time series model or time series forecasting video?? Please, please make those videos with your amazing explanations!!!! :):)

    • @statquest
      @statquest  4 years ago +2

      I don't have one yet, but it's on the to-do list. :)

    • @myunghee7231
      @myunghee7231 4 years ago +1

      @StatQuest with Josh Starmer ohhh, good to hear!!!! Thank you for the response! I will wait for the time series!!

  • @younghoe6849
    @younghoe6849 4 years ago +1

    Great master, thanks for your great effort

  • @SzehangChoi
    @SzehangChoi 3 months ago +1

    You saved my degree

  • @martinflo
    @martinflo 1 year ago +1

    Hi, thanks for the great videos. I don't understand why we get this "kink" with Lasso regression and not Ridge.

    • @statquest
      @statquest  1 year ago

      The "kink" comes from the absolute value function.

  • @jaskaransingh0304
    @jaskaransingh0304 1 year ago +1

    Great explanation!

  • @mik8760
    @mik8760 4 years ago +1

    THAT IS SOOOOOO GOOD MAN

  • @samuelhughes804
    @samuelhughes804 4 years ago +1

    All your videos are great, but the regularization ones have been a fantastic help. Was wondering if you were planning any on selective inference from lasso models? That would complete the set for me haha

  • @statisticaldemystic6817
    @statisticaldemystic6817 4 years ago +1

    Very well done as usual.

    • @statquest
      @statquest  4 years ago

      Thank you very much! :)

  • @joxa6119
    @joxa6119 2 years ago +2

    So you mean this StatQuest answered the question "why can Lasso regression remove useless variables and Ridge cannot", am I right?

  • @suryan5934
    @suryan5934 4 years ago +1

    Amazing video as always, Josh! Just to be sure I got it correctly: the plot of RSS error vs. slope is a parabola in 2D. So when we do the same thing in 3D, i.e. with 2 parameters, does it represent the same bowl-shaped cost function that we try to minimise?

  • @adhiyamaanpon4168
    @adhiyamaanpon4168 4 years ago +1

    Hey Josh!! Can you please make a video on the K-modes algorithm for categorical variables (unsupervised learning), with an example.. please?

  • @richardxue1506
    @richardxue1506 2 months ago

    Ridge Regression (L2-norm) never shrinks coefficients to zero, but Lasso Regression (L1-norm) may shrink coefficients to zero, and that's the reason Lasso can perform feature selection while Ridge can't.
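
    For one standardized feature with least-squares slope a, the standard one-dimensional calculation (our addition, not from the video) makes this precise:

    ```latex
    \min_{s}\,(a - s)^{2} + \lambda s^{2}
      \;\Rightarrow\; s^{*} = \frac{a}{1 + \lambda}, \text{ never } 0 \text{ unless } a = 0;
    \qquad
    \min_{s}\,(a - s)^{2} + \lambda \lvert s \rvert
      \;\Rightarrow\; s^{*} = \operatorname{sign}(a)\,\max\!\left(\lvert a \rvert - \tfrac{\lambda}{2},\, 0\right)
    ```

    So the Lasso solution is exactly 0 as soon as lambda >= 2|a|, while the Ridge solution only shrinks toward 0.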

  • @oanphong61
    @oanphong61 2 years ago +1

    Thank you very much!

  • @TheCheukhin
    @TheCheukhin 4 years ago +1

    Underrated

  • @juanmanuelpedrosa53
    @juanmanuelpedrosa53 4 years ago +1

    Hi Josh, would you consider explaining the nuances of the arithmetic, geometric and harmonic means? I couldn't find it among the quests.

  • @premnathkn1976
    @premnathkn1976 4 years ago +2

    Clear and apt..

  • @usamahussain4461
    @usamahussain4461 2 years ago +1

    Excellent.
    I have just one question. In the case of the L1 penalty, isn't the line with lambda equal to 40 (i.e., slope 0) a bad line? I mean, with the blue line we were getting a better fit, since it didn't completely ignore weight in predicting height and the sum of residuals was smallest?

    • @statquest
      @statquest  2 years ago

      What time point, minutes and seconds, are you asking about?

    • @usamahussain4461
      @usamahussain4461 2 years ago

      @@statquest 7:16

    • @statquest
      @statquest  2 years ago +1

      @@usamahussain4461 Yes. For both L1 and L2 you need to test different values for lambda, including setting it to 0, to find the optimal value.

  • @priyanatraj5634
    @priyanatraj5634 4 years ago

    Thank you for helping us to understand statistics! May I request for a video on Dirichlet regression?

  • @RAJATTHEPAGAL
    @RAJATTHEPAGAL 4 years ago

    L2 = weight penalization (smooths out the loss curve and reduces overfitting, but a higher lambda can kill model training)
    L1 = weight zeroing (dragging weights to zero; useful for learnable ignoring of variables, and at times for high-dimensional data)
    .
    I have used both of these before with a similar mindset, even in deep learning, to reason about what was happening. The visualisation really did help, so I just wanted to know: does this simplistic way of viewing the behaviour make sense??? Or am I missing something ....
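
    A tiny numeric sketch of the two pulls described above (our own, plain NumPy; the data gradient is set to zero so only the penalties act):

    ```python
    import numpy as np

    def penalty_step(w, lam, kind, lr=0.25):
        # Gradient (or subgradient) of the penalty alone; data gradient omitted.
        pull = 2 * lam * w if kind == "l2" else lam * np.sign(w)
        return w - lr * pull

    w_l1 = w_l2 = 1.0
    for _ in range(200):
        w_l1 = penalty_step(w_l1, lam=0.5, kind="l1")
        w_l2 = penalty_step(w_l2, lam=0.5, kind="l2")
    print(w_l1, w_l2)  # L1 lands on exactly 0.0; L2 only decays (0.75**200, tiny but nonzero)
    ```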

  • @cenyingyang1611
    @cenyingyang1611 3 years ago

    Hi Josh, great videos as always!!! I am wondering, are there any guidelines on how we should pick which one? In what cases will ridge be better, and in what cases will lasso be better?

    • @statquest
      @statquest  3 years ago

      I talk about that a little bit in this video: ua-cam.com/video/ctmNq7FgbvI/v-deo.html