Standardization vs Normalization Clearly Explained!

  • Published 29 Aug 2022
  • Let's understand feature scaling and the differences between standardization and normalization in great detail.
    #machinelearning #datascience #artificialintelligence
    For more videos please subscribe -
    bit.ly/normalizedNERD
    Support me if you can ❤️
    www.buymeacoffee.com/normaliz...
    Join our discord -
    / discord
    Facebook -
    / nerdywits
    Instagram -
    / normalizednerd
    Twitter -
    / normalized_nerd

COMMENTS • 101

  • @NedSar85
    @NedSar85 1 year ago +61

    This video should be nominated for the YouTube Oscars/Grammy awards....

  • @xTurqoise
    @xTurqoise 1 year ago +41

    Also in Principal Component Analysis, scaled features are very important because we search for the principal axes that have the highest variance. So if we have one feature in [0,1] and the other one in [1, 100], then the latter one has a much higher variance, even though it may not contain much information to be kept by the PCA.

    • @NormalizedNerd
      @NormalizedNerd  1 year ago +4

      Great point! Feature scaling is very important in PCA also.
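
The point in this thread can be made concrete: when one feature spans [0, 1] and another spans roughly [1, 100], the first principal axis is pulled almost entirely toward the wide-range feature, and standardizing first removes that artifact. This is a minimal numpy illustration of PCA via SVD, an editor's sketch rather than code from the video:

```python
import numpy as np

rng = np.random.default_rng(0)
# Two correlated features on very different scales:
# x1 in [0, 1], x2 roughly in [1, 100].
x1 = rng.uniform(0, 1, 500)
x2 = 1 + 99 * (0.5 * x1 + 0.5 * rng.uniform(0, 1, 500))
X = np.column_stack([x1, x2])

def first_pc(X):
    """Direction of maximum variance (first principal axis)."""
    Xc = X - X.mean(axis=0)
    _, _, vt = np.linalg.svd(Xc, full_matrices=False)
    return vt[0]

# Without scaling, the wide-range feature dominates the first PC.
pc_raw = first_pc(X)

# After standardization, both features contribute comparably.
Xs = (X - X.mean(axis=0)) / X.std(axis=0)
pc_scaled = first_pc(Xs)

print(np.abs(pc_raw))     # almost entirely along x2
print(np.abs(pc_scaled))  # both components are substantial
```

On standardized data the principal axes come from the correlation structure rather than the raw variances, which is exactly why the scaled PCA no longer ignores the narrow-range feature.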

  • @severtone263
    @severtone263 2 months ago +2

    Your clarity is amazing. This helps! Sub earned

  • @AbheeBrahmnalkar
    @AbheeBrahmnalkar 9 months ago +4

    This is the first video I watched and man you have crushed it. This intuitive explanation of math was a joy to watch. Please keep them coming.

  • @Anna-uh7qx
    @Anna-uh7qx 4 months ago +1

    How many more people would understand math if we had explanations like this. I feel like I have been reading math papers written in French, and you just spoke in English for me. Gosh, THANK-YOU.

  • @Mutual_Information
    @Mutual_Information 1 year ago +10

    I was wondering where you’ve been! Nice to see you back to posting.
    Well covered topic; it's easy to overlook standardization and normalization, thinking they are simple. They have some important subtleties.

    • @PritishMishra
      @PritishMishra 1 year ago +1

      I saw you today on Yannic's channel as well, nice to see you again.

    • @NormalizedNerd
      @NormalizedNerd  1 year ago +2

      Thanks a lot mate! Really happy to be able to upload again :D❤️

    • @taotaotan5671
      @taotaotan5671 1 year ago

      Hey DJ, we are waiting for you also!

    • @Mutual_Information
      @Mutual_Information 1 year ago

      @@taotaotan5671 lol coming soon!!

    • @stayinthepursuit8427
      @stayinthepursuit8427 6 months ago

      Standardization makes the original distribution look more normal. It doesn't just give a zero mean and 1 stdev.
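
A caveat worth adding to the comment above: standardization (z-scoring) is a linear transform, subtracting the mean and dividing by the standard deviation, so it only shifts and rescales the data; it cannot change the shape of the distribution, and a skewed sample stays exactly as skewed afterward. Only a non-linear transform (log, Box-Cox, etc.) can make data more normal. A quick numpy check, added by the editor rather than taken from the video:

```python
import numpy as np

rng = np.random.default_rng(1)
# A strongly right-skewed sample (exponential distribution).
x = rng.exponential(scale=2.0, size=10_000)

# Standardization (z-score): subtract the mean, divide by the std.
z = (x - x.mean()) / x.std()

def skewness(a):
    """Sample skewness (third standardized moment)."""
    a = (a - a.mean()) / a.std()
    return (a ** 3).mean()

print(round(z.mean(), 6), round(z.std(), 6))         # ~0 and ~1
print(round(skewness(x), 3), round(skewness(z), 3))  # identical: shape preserved
```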

  • @jullienbeaufondcamacho2055
    @jullienbeaufondcamacho2055 8 months ago +1

    Great, especially good to explain the misconception about non-linear transformations, which for some reason constantly come up in conversations as normalization/standardization.

  • @lenko_me
    @lenko_me 1 year ago

    Very nice video! Everything became clear as soon as I watched this

  • @vemundrye8999
    @vemundrye8999 2 months ago

    You're doing amazing work here, hopefully one day you will get the recognition you deserve

  • @elotimmi4942
    @elotimmi4942 7 days ago

    I just love your channel name so much

  • @vantuantran225
    @vantuantran225 1 year ago +2

    Thanks man, it helped me so much to understand normalization.
    Very helpful

  • @ramblingsofadegenerate1174
    @ramblingsofadegenerate1174 29 days ago

    Great explanation boss, helped a lot. Keep going, guru!

  • @TheEudesFilho
    @TheEudesFilho 20 days ago

    Great lesson! Thank you so much for your video

  • @user-by8sn4km5q
    @user-by8sn4km5q 26 days ago

    WOWW! Absolutely loved this! Thanks

  • @jb_makesgames2264
    @jb_makesgames2264 1 year ago +3

    Good video; your description and explanation are good. However, relating the basic explanations to real-world problems would be helpful for users. Using a partial distribution to calculate things such as volatility based on only the negative change is interesting, as is using curve fitting of data to determine parameters for trading and models.

  • @thomasbates9189
    @thomasbates9189 7 months ago

    High quality content. Thank you!

  • @TranquilSeaOfMath
    @TranquilSeaOfMath 6 months ago

    Very nice explanation and demonstration. Good topic.

  • @sailakshmicholanilath9794
    @sailakshmicholanilath9794 1 month ago

    Thanks to you I understood why feature scaling is important, thanks legend

  • @syifasyuhaidahazman2384
    @syifasyuhaidahazman2384 1 year ago

    love it. thanks so much for the explanation

  • @gactve2110
    @gactve2110 5 months ago +1

    Great videos, dude!
    It's a shame we no longer get this great content

  • @Hitman1Sniper
    @Hitman1Sniper 1 year ago

    So glad to see you back !

  • @user-qr4be3sl8u
    @user-qr4be3sl8u 1 year ago

    An excellent explanation... Thanks a lot for sharing...

  • @arijitRC473
    @arijitRC473 1 year ago +1

    Great to see you back bro ! ✌️

  • @aditi3601
    @aditi3601 1 year ago +1

    Absolutely loved the explanation!

  • @ashraf_isb
    @ashraf_isb 4 months ago

    Sir, please make more videos, your sessions are very helpful

  • @aravindr9109
    @aravindr9109 7 months ago

    Normalisation became the new normal to me, great job dude!!!!

  • @vmkkannan
    @vmkkannan 1 year ago

    Good explanation... thank you very much 😊😊😊😊😊😊

  • @Ruhsaran
    @Ruhsaran 6 months ago

    Excellent! Thanks.

  • @DarkZeuss
    @DarkZeuss 1 year ago +1

    Thanks man for the video, this was without a doubt very helpful.
    However, I was wondering how you make all these animations?
    Thanks in advance for your kindness.

  • @devonrd
    @devonrd 11 days ago

    Great video!

  • @kienchung8189
    @kienchung8189 3 months ago

    coolest presentation!

  • @NathanCats777
    @NathanCats777 1 year ago +2

    Hi there, thanks a lot! I have one question on min-max normalization, as I'm using Stata. When I use the formula, shall I take into consideration the actual min and max values of the variable, or should I consider the potential/feasible range of values the variable can assume? E.g. I have one variable that can take values from -100 to +100, yet in my dataset the min is -12 and the max is 34.
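
Both choices in the question above are defensible: the observed min/max maps this particular sample exactly onto [0, 1], while theoretical bounds keep scores comparable across datasets at the cost of not spanning the full interval. The numpy sketch below (an editor's illustration using the hypothetical range from the comment, not the video's code) shows the difference:

```python
import numpy as np

# Observed values; the variable could in principle range -100..+100.
x = np.array([-12.0, 0.0, 10.0, 34.0])

def min_max(x, lo=None, hi=None):
    """Min-max scaling; lo/hi default to the observed extremes."""
    lo = x.min() if lo is None else lo
    hi = x.max() if hi is None else hi
    return (x - lo) / (hi - lo)

# Empirical range: observed min maps to 0, observed max to 1.
e = min_max(x)
print(e)

# Theoretical range: values stay comparable across datasets,
# but the scaled data won't span the full [0, 1] interval.
t = min_max(x, lo=-100, hi=100)
print(t)
```

If new data may fall outside the observed range (e.g. a future value of 90), the empirical version can produce values outside [0, 1], which is one practical reason to prefer the feasible-range bounds when they are known.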

  • @ts.nathan7786
    @ts.nathan7786 1 year ago

    Very good explanation.

  • @analyticseveryday4019
    @analyticseveryday4019 1 year ago

    Extremely beautiful viz, teaching methodology is amazing too. I too run an analytics channel, but you inspired me more

  • @jacobnyonyintono1442
    @jacobnyonyintono1442 11 months ago

    This guy explained in 5 minutes something my lecturers failed to explain in years

  • @xeniosm4549
    @xeniosm4549 1 year ago

    Hi. For deep learning, is it best to do min-max normalization (i.e. stretch values to 0-1) or max normalization (i.e. only divide by max to keep within 0-1)? I see a problem with the former approach, as a single outlying value can significantly skew all the rest of the values, making them not very comparable to the reference values.
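
To make the outlier concern above concrete: with one extreme value, both min-max and divide-by-max compress the ordinary values toward zero, so switching between them doesn't fix the problem. The robust median/IQR scaling at the end is an alternative the comment doesn't mention, included here only as an editor's assumption-flagged illustration:

```python
import numpy as np

x = np.array([1.0, 2.0, 3.0, 4.0, 5.0, 1000.0])  # one extreme outlier

minmax = (x - x.min()) / (x.max() - x.min())  # stretch to [0, 1]
by_max = x / x.max()                          # divide by max only

# Either way, the outlier squashes the ordinary values near 0.
print(minmax.round(4))
print(by_max.round(4))

# A robust alternative: center by the median and scale by the
# interquartile range, so one extreme value barely affects the rest.
q1, q3 = np.percentile(x, [25, 75])
robust = (x - np.median(x)) / (q3 - q1)
print(robust.round(4))
```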

  • @JackSee-wr3le
    @JackSee-wr3le 10 months ago

    Excellent visuals!

  • @DennisKorolevych
    @DennisKorolevych 1 year ago

    Jesus, that's so great. I'm totally new to data science and ML and I'm trying to take it slow to properly understand everything. This video was super great for doing that. I picked up new knowledge that will be helpful when I'm writing my own ML algorithm (probably KNN-based image classification).

  • @brianthomas9148
    @brianthomas9148 1 year ago +1

    Your explanation was damn neat!

  • @deborahfranza2925
    @deborahfranza2925 10 months ago

    AWESOME VIDEO TYSM YOU'RE AWESOME

  • @ryguywy
    @ryguywy 1 year ago

    excellent visualization, thanks!

  • @suyogshinde9050
    @suyogshinde9050 1 year ago

    Good that you are back!😎

  • @shivanigawande4953
    @shivanigawande4953 1 year ago

    Yayyyyy! Thanks for an amazing video.

  • @hawardizayee3263
    @hawardizayee3263 5 months ago

    May I ask about the technologies that have been used to create this content?
    I'd really appreciate you sharing.

  • @Skandawin78
    @Skandawin78 3 months ago

    great video, to the point with great visuals, subscribed. Btw, how did you make these nice graphics?

  • @mohammadzeeshan992
    @mohammadzeeshan992 1 year ago

    Good video, content and animation are amazing.

  • @farhanfaiyaz8471
    @farhanfaiyaz8471 8 months ago

    Hey! I wanted to know which software/tools you use to make videos like this?

  • @fosheimdet
    @fosheimdet 1 year ago

    Great videos! May I ask what software you use to create your equations/animations?

    • @neha4206
      @neha4206 1 year ago

      I think he uses manim

  • @brianthomas9148
    @brianthomas9148 1 year ago +1

    could you please tell me what software you used for these visualizations

  • @tim_faith
    @tim_faith 7 months ago

    thank you

  • @buildlackey
    @buildlackey 3 months ago

    superb !

  • @tanvirhasanmonir1627
    @tanvirhasanmonir1627 1 year ago

    Very helpful

  • @nipunikalnu8645
    @nipunikalnu8645 3 months ago

    what software do you use for animations?

  • @Vikram-wx4hg
    @Vikram-wx4hg 1 year ago

    Very nice!

  • @memelol1859
    @memelol1859 1 year ago

    Omggggg ur back!!!

  • @cyberpunk_edgerunners
    @cyberpunk_edgerunners 1 year ago

    thanks bro

  • @roshantonge1952
    @roshantonge1952 1 year ago

    How can you be so perfect at explaining?

  • @valerionetophdcandidate
    @valerionetophdcandidate 6 months ago

    Very good. I have a doubt. I would love to hear your comment on it.
    In recent months, I have been reflecting on the apparent prevalence of certain predatory mega-journals, in particular MDPI's Sustainability, which stands out as the journal with the most publications on various topics, according to various tourism bibliometrics. However, this observation has led me to consider the need for further analysis.
    Specifically, it has caught my attention that when using the percentage of publications in relation to the specific research topic in percentage terms (number of articles on a topic divided by the total number of articles published), the magnitude of the contribution decreases drastically. To illustrate this point, let me present a hypothetical example:
    Journal A has published 10 articles on prospect theory in the last five years, but its total output is 600 articles.
    In comparison, Journal B has published 25 articles on prospect theory in the same period, but its total publication volume exceeds 49,000 articles.
    Some bibliometrics would say that Journal B is the one that publishes the most; however, it is just a matter of gaining by quantity. I gave the journals weights based on their percentages (Weight of Journal = Percentage of Journal / Highest Percentage among journals), then I did min-max normalisation (Normalised Weight = (Weight of Journal − Min Weight) / (Max Weight − Min Weight)), then I created a weighted metric with normalisation (multiplying the normalised weights by their weights). Is the use of min-max normalisation here correct? Do you think there is a better approach?
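
A sketch of the pipeline described in the comment, using its hypothetical journal numbers (an editor's illustration, not data from the video). It exposes one pitfall worth noting: with only two journals, min-max normalization always maps the weights to exactly 1 and 0, discarding the size of the gap between them, so the raw percentage may already be the more informative metric:

```python
# Hypothetical numbers from the comment above.
articles_on_topic = {"Journal A": 10, "Journal B": 25}
total_articles = {"Journal A": 600, "Journal B": 49_000}

# Share of each journal's total output devoted to the topic.
pct = {j: articles_on_topic[j] / total_articles[j] for j in articles_on_topic}

# Weight = percentage / highest percentage among journals.
max_pct = max(pct.values())
weight = {j: pct[j] / max_pct for j in pct}

# Min-max normalization of the weights.
lo, hi = min(weight.values()), max(weight.values())
norm = {j: (weight[j] - lo) / (hi - lo) for j in weight}

print(pct)   # A ≈ 0.0167, B ≈ 0.00051
print(norm)  # with only two journals this collapses to {1.0, 0.0}
```

With exactly two items, min-max makes the result independent of how far apart the journals actually are, which argues for reporting the percentages (or weights) directly rather than normalizing them again.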

  • @lampeve
    @lampeve 1 month ago

    You are the best!

  • @S.G.2
    @S.G.2 7 days ago

    yes ty

  • @sabastianmoore8467
    @sabastianmoore8467 1 year ago

    Love the sound effects! lol

  • @Anagha-pm3fu
    @Anagha-pm3fu 21 days ago

    do you use manim?

  • @matheusalvessoaresdecarval1834

    I'm new to machine learning and there's something I don't quite understand:
    if you scale the X (input), does it affect the Y (output)? In a real-life scenario where I want to make a prediction with my model, won't the scaling affect the results? If I shrink the input, won't the output also be smaller?

    • @yamanarslanca8325
      @yamanarslanca8325 11 months ago

      Judging by what you are saying: no, I don't think so (don't take my word for it though, I am new to ML). I'd say your weights will be computed accordingly. But I read that even scaling your outputs (before training) is a thing; there are people who do that.
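
The reply above has it right: if the model is trained on scaled inputs, the learned weights absorb the scaling, so predictions stay in the original units of y, provided the same training-time statistics are applied to new inputs at prediction time. A minimal numpy sketch with a toy linear model (an editor's illustration, not the video's code):

```python
import numpy as np

rng = np.random.default_rng(2)
# Toy regression: y depends on x, which lives on a large scale.
x = rng.uniform(0, 1000, 200)
y = 3.0 * x + 5.0 + rng.normal(0, 1, 200)

# Fit a linear model on standardized inputs.
mu, sigma = x.mean(), x.std()
xs = (x - mu) / sigma
w, b = np.polyfit(xs, y, 1)

# At prediction time, apply the SAME training-time scaling
# (same mu and sigma) to the new input.
x_new = 500.0
y_pred = w * ((x_new - mu) / sigma) + b

# The prediction is still in the original units of y (~1505 here):
# the learned slope and intercept absorbed the input scaling.
print(y_pred)
```

The common mistake is re-fitting the scaler on the new data instead of reusing the training-time mean and std; that does distort predictions.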

  • @shrinivassampathmuthupalan8283

    Well explained.

  • @user-wr4yl7tx3w
    @user-wr4yl7tx3w 1 year ago

    Excellent!

  • @benradmer7668
    @benradmer7668 3 hours ago

    Can someone here help me with my data preprocessing project, or does anyone know where I can find help? I am so stuck and can't get over 70%. I really want to do well but don't really know what else to do in preprocessing.

  • @craftbalika3568
    @craftbalika3568 1 year ago +1

    Great to get back nerdy notifications...

  • @736939
    @736939 1 year ago

    Please add NLP course.

    • @NormalizedNerd
      @NormalizedNerd  1 year ago +1

      Hey, have you checked this playlist?
      ua-cam.com/play/PLM8wYQRetTxCCURc1zaoxo9pTsoov3ipY.html
      Feel free to suggest more topics!

  • @vmendesmagalhaes
    @vmendesmagalhaes 11 months ago

    I really hope you are fine now. Your videos have helped me a lot several times. You could easily be a teacher if you wanted to. Thanks!

  • @donghyunlee-zg8hx
    @donghyunlee-zg8hx 9 months ago

    2:15

  • @patralichakraborty1295
    @patralichakraborty1295 1 year ago +1

    ♥️♥️♥️

  • @avranj
    @avranj 1 year ago

    Bro, how can we decide which technique to use when? And if selecting normalization, then which kind, such as min-max etc.? Could you please elaborate on this?

  • @rayaneaboud9043
    @rayaneaboud9043 3 months ago

    gg buddy, you opened new horizons for me

  • @ritira20mila
    @ritira20mila 1 year ago

    Sorry, I can't understand at 3:10: good old [what?] algorithm

  • @muhtasirimran
    @muhtasirimran 1 year ago

    Ahhhahaaa, I was sad seeing your last video was a year ago. Your visualization is really cool, as good as Intuitive ML's. But he stopped making videos 3 years ago.

  • @suyashrahatekar4964
    @suyashrahatekar4964 2 months ago

    Blud comes back after 1 year and then doesn't return even after another year has gone past.

  • @Rockefeller.69
    @Rockefeller.69 1 year ago

    Khan Academy 2.0?

  • @nitika9769
    @nitika9769 18 days ago

    i wanna be as smart as you

  • @azingo2313
    @azingo2313 10 months ago

    Don't want to see your face. Just slides please. Also avoid background music.