Standardization vs Normalization Clearly Explained!

  • Published 29 Aug 2022
  • Let's understand feature scaling and the differences between standardization and normalization in great detail.
    #machinelearning #datascience #artificialintelligence
    For more videos please subscribe -
    bit.ly/normalizedNERD
    Support me if you can ❤️
    www.buymeacoffee.com/normaliz...
    Join our discord -
    / discord
    Facebook -
    / nerdywits
    Instagram -
    / normalizednerd
    Twitter -
    / normalized_nerd

COMMENTS • 104

  • @NedSar85
    @NedSar85 1 year ago +66

    This video should be nominated for the YouTube Oscars/Grammy awards....

  • @xTurqoise
    @xTurqoise 1 year ago +43

    Also in Principal Component Analysis, scaled features are very important because we search for the principal axes that have the highest variance. So if we have one feature in [0,1] and the other one in [1, 100], then the latter one has a much higher variance, even though it may not contain much information to be kept by the PCA.

    • @NormalizedNerd
      @NormalizedNerd  1 year ago +4

      Great point! Feature scaling is very important in PCA as well.
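The variance imbalance described in this thread can be sketched with a small pure-Python example (toy numbers, made up for illustration):

```python
import statistics

# Toy data (made up): one feature on [0, 1], one on roughly [1, 100]
small = [0.1, 0.4, 0.5, 0.9]
large = [10.0, 40.0, 50.0, 90.0]  # same shape, 100x the scale

# The wide-range feature's variance dwarfs the other's, so unscaled
# PCA would align its first principal axis with it regardless of
# how informative that feature actually is.
print(statistics.pvariance(small))  # 0.081875
print(statistics.pvariance(large))  # 818.75

def standardize(xs):
    """Shift to zero mean and scale to unit (population) standard deviation."""
    mu = statistics.fmean(xs)
    sigma = statistics.pstdev(xs)
    return [(x - mu) / sigma for x in xs]

# After standardization both features carry variance 1,
# so neither dominates the principal axes.
print(statistics.pvariance(standardize(small)))  # ~1.0
print(statistics.pvariance(standardize(large)))  # ~1.0
```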

  • @severtone263
    @severtone263 2 months ago +4

    Your clarity is amazing. This helps! Sub earned

  • @AbheeBrahmnalkar
    @AbheeBrahmnalkar 9 months ago +5

    This is the first video I watched and man you have crushed it. This intuitive explanation of math was a joy to watch. Please keep them coming.

  • @Anna-uh7qx
    @Anna-uh7qx 4 months ago +2

    How many more people would understand math if we had explanations like this? I feel like I have been reading math papers written in French, and you just spoke in English for me. Gosh, THANK YOU.

  • @vemundrye8999
    @vemundrye8999 3 months ago

    You're doing amazing work here, hopefully one day you will get the recognition you deserve

  • @jullienbeaufondcamacho2055
    @jullienbeaufondcamacho2055 9 months ago +1

    Great, especially good to explain the misconception around non-linear transformations, which for some reason are constantly referred to in conversations as normalization/standardization

  • @lenko_me
    @lenko_me 1 year ago

    Very nice video! Everything became clear as soon as I watched this

  • @Mutual_Information
    @Mutual_Information 1 year ago +11

    I was wondering where you’ve been! Nice to see you back to posting.
    Well-covered topic - it’s easy to overlook standardization and normalization, thinking they are simple. They have some important subtleties.

    • @PritishMishra
      @PritishMishra 1 year ago +1

      I saw you today on Yannic's channel as well, nice to see you again.

    • @NormalizedNerd
      @NormalizedNerd  1 year ago +2

      Thanks a lot mate! Really happy to be able to upload again :D❤️

    • @taotaotan5671
      @taotaotan5671 1 year ago

      Hey DJ, we are waiting for you also!

    • @Mutual_Information
      @Mutual_Information 1 year ago

      @@taotaotan5671 lol coming soon!!

    • @stayinthepursuit8427
      @stayinthepursuit8427 7 months ago

      Standardization makes the original distribution look more normal. It doesn't just produce a zero mean and a standard deviation of 1.

  • @vantuantran225
    @vantuantran225 1 year ago +2

    thanks man, it helped me so much to understand normalization
    Very helpful

  • @elotimmi4942
    @elotimmi4942 25 days ago

    I just love your channel name so much

  • @Hitman1Sniper
    @Hitman1Sniper 1 year ago

    So glad to see you back!

  • @user-by8sn4km5q
    @user-by8sn4km5q 1 month ago

    WOWW! Absolutely loved this! Thanks

  • @TheEudesFilho
    @TheEudesFilho 1 month ago

    Great lesson! Thank you so much for your video

  • @thomasbates9189
    @thomasbates9189 8 months ago

    High quality content. Thank you!

  • @jb_makesgames2264
    @jb_makesgames2264 1 year ago +3

    Good video - your description and explanation are good. However, relating the basic explanations to real-world problems would be helpful for users. Also, using a partial distribution to calculate things such as volatility based only on the negative change is interesting, as is using curve fitting of data to determine parameters for trading and models.

  • @gactve2110
    @gactve2110 6 months ago +1

    Great videos, dude!
    It's a shame we no longer get this great content

  • @TranquilSeaOfMath
    @TranquilSeaOfMath 7 months ago

    Very nice explanation and demonstration. Good topic.

  • @arijitRC473
    @arijitRC473 1 year ago +1

    Great to see you back bro! ✌️

  • @ramblingsofadegenerate1174
    @ramblingsofadegenerate1174 1 month ago

    Great explanation boss, helped a lot. Keep going, guru!

  • @syifasyuhaidahazman2384
    @syifasyuhaidahazman2384 1 year ago

    love it. thanks so much for the explanation

  • @user-qr4be3sl8u
    @user-qr4be3sl8u 1 year ago

    An excellent explanation...Thanks a lot for sharing ....

  • @aditi3601
    @aditi3601 1 year ago +1

    Absolutely loved the explanation!

  • @aravindr9109
    @aravindr9109 7 months ago

    Normalisation became the new normal for me, great job dude!!!!

  • @ashraf_isb
    @ashraf_isb 4 months ago

    sir please make more videos, your sessions are very helpful

  • @devonrd
    @devonrd 29 days ago

    Great video!

  • @suyogshinde9050
    @suyogshinde9050 1 year ago

    Good that you are back!😎

  • @brianthomas9148
    @brianthomas9148 1 year ago +1

    Your explanation was damn neat!

  • @analyticseveryday4019
    @analyticseveryday4019 1 year ago

    extremely beautiful viz, teaching methodology is amazing too. I too run an analytics channel, but u inspired me more

  • @kienchung8189
    @kienchung8189 4 months ago

    coolest presentation!

  • @DennisKorolevych
    @DennisKorolevych 1 year ago

    Jesus, that's so great. I'm totally new to data science and ML and I'm trying to take it slow to properly understand everything. This video was super great in doing that. I picked up new knowledge that will be helpful for when I'm writing my own ML algorithm (probably KNN-based image classification)

  • @vmkkannan
    @vmkkannan 1 year ago

    Good explanation... thank you very much 😊😊😊😊😊😊

  • @JackSee-wr3le
    @JackSee-wr3le 10 months ago

    Excellent visuals!

  • @sailakshmicholanilath9794
    @sailakshmicholanilath9794 1 month ago

    Thanks to you I understood why feature scaling is important, thanks legend

  • @Ruhsaran
    @Ruhsaran 7 months ago

    Excellent! Thanks.

  • @ts.nathan7786
    @ts.nathan7786 1 year ago

    Very good explanation.

  • @shivanigawande4953
    @shivanigawande4953 1 year ago

    Yayyyyy! Thanks for an amazing video.

  • @mohammadzeeshan992
    @mohammadzeeshan992 1 year ago

    Good video, the content and animations are amazing.

  • @deborahfranza2925
    @deborahfranza2925 10 months ago

    AWESOME VIDEO TYSM YOU'RE AWESOME

  • @DarkZeuss
    @DarkZeuss 1 year ago +1

    Thanks man for the video, this was without a doubt very helpful.
    However, I was wondering how do you make all these animations?
    Thanks in advance for your kindness.

  • @Skandawin78
    @Skandawin78 4 months ago

    great video, to the point with great visuals, subscribed.. Btw, how did you make these nice graphics?

  • @alextonev4145
    @alextonev4145 9 days ago

    Thank you!

  • @NathanCats777
    @NathanCats777 1 year ago +2

    Hi there, thanks a lot! I have one question on min-max normalization as I'm using Stata. When I use the formula, should I take into account the actual min and max values of the variable, or should I consider the potential/feasible range of values the variable can assume? E.g. I have one variable that can take values -100 to +100, yet in my dataset the min is -12 and the max is 34.

  • @jacobnyonyintono1442
    @jacobnyonyintono1442 1 year ago

    This guy explained in 5 minutes something my lecturers failed to explain in years

  • @ryguywy
    @ryguywy 1 year ago

    excellent visualization, thanks!

  • @fosheimdet
    @fosheimdet 1 year ago

    Great videos! May I ask what software you use to create your equations/animations?

    • @neha4206
      @neha4206 1 year ago

      I think he uses manim

  • @Vikram-wx4hg
    @Vikram-wx4hg 1 year ago

    Very nice!

  • @xeniosm4549
    @xeniosm4549 1 year ago

    Hi. For deep learning, is it best to do min-max normalization (i.e. stretch values to 0-1) or max normalization (i.e. only divide by the max to keep values within 0-1)? I see a problem with the former approach, as a single outlying value can significantly skew all the rest of the values, making them not very comparable to the reference values.
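The outlier sensitivity raised here shows up in a tiny sketch (made-up numbers); note that dividing by the max alone suffers from the same problem, since one extreme value still sets the scale:

```python
def min_max_scale(xs):
    """Stretch values so the min maps to 0 and the max to 1."""
    lo, hi = min(xs), max(xs)
    return [(x - lo) / (hi - lo) for x in xs]

def max_scale(xs):
    """Divide by the max only (assumes non-negative values)."""
    m = max(xs)
    return [x / m for x in xs]

clean = [1.0, 2.0, 3.0, 4.0, 5.0]
with_outlier = clean + [100.0]  # one extreme value appended

print(min_max_scale(clean))         # evenly spread over [0, 1]
print(min_max_scale(with_outlier))  # original five points squashed below 0.05
print(max_scale(with_outlier))      # same issue: the bulk collapses to 0.01-0.05
```

A common mitigation is to clip extremes or scale by quantiles (robust scaling) before mapping into [0, 1].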

  • @buildlackey
    @buildlackey 4 months ago

    superb !

  • @memelol1859
    @memelol1859 1 year ago

    Omggggg ur back!!!

  • @tanvirhasanmonir1627
    @tanvirhasanmonir1627 1 year ago

    Very helpful

  • @tim_faith
    @tim_faith 8 months ago

    thank you

  • @hawardizayee3263
    @hawardizayee3263 6 months ago

    May I ask what technologies were used to create this content?
    I'd really appreciate you sharing.

  • @cyberpunk_edgerunners
    @cyberpunk_edgerunners 1 year ago

    thanks bro

  • @sabastianmoore8467
    @sabastianmoore8467 1 year ago

    Love the sound effects! lol

  • @user-wr4yl7tx3w
    @user-wr4yl7tx3w 1 year ago

    Excellent!

  • @farhanfaiyaz8471
    @farhanfaiyaz8471 9 months ago

    Hey! I wanted to know which software/ tools you used to make videos like this?

  • @lampeve
    @lampeve 1 month ago

    You are the best!

  • @brianthomas9148
    @brianthomas9148 1 year ago +1

    could you please tell me what software you used for these visualizations

  • @shrinivassampathmuthupalan8283

    Well explained.

  • @roshantonge1952
    @roshantonge1952 1 year ago

    How can you be so perfect at explaining?

  • @S.G.2
    @S.G.2 25 days ago

    yes ty

  • @valerionetophdcandidate
    @valerionetophdcandidate 6 months ago

    Very good. I have a question; I would love to hear your comment on it.
    In recent months, I have been reflecting on the apparent prevalence of certain predatory mega-journals, in particular MDPI's Sustainability, which stands out as the journal with the most publications on various topics, according to various tourism bibliometrics. However, this observation has led me to consider the need for further analysis.
    Specifically, it has caught my attention that when using the percentage of publications on the specific research topic (number of articles on a topic divided by the total number of articles published), the magnitude of the contribution decreases drastically. To illustrate this point, let me present a hypothetical example:
    Journal A has published 10 articles on prospect theory in the last five years, but its total output is 600 articles.
    In comparison, Journal B has published 25 articles on prospect theory in the same period, but its total publication volume exceeds 49,000 articles.
    Some bibliometrics would say that Journal B is the one that publishes the most; however, it is just a matter of winning by quantity. I gave the journals weights based on their percentages (Weight of journal = Percentage of journal / Highest percentage among journals), then I did the min-max normalisation (Normalised weight = (Weight of journal − Min weight) / (Max weight − Min weight)), then I created a weighted metric (multiplying the normalised values by their weights). Is the use of min-max normalisation correct here? Do you think there is a better approach?

  • @_dion_
    @_dion_ 9 days ago

    excellent.

  • @vmendesmagalhaes
    @vmendesmagalhaes 1 year ago

    I really hope you are fine now. Your videos have helped me a lot several times. You could easily be a teacher if you wanted to. Thanks!

  • @craftbalika3568
    @craftbalika3568 1 year ago +1

    Great to get back nerdy notifications...

  • @nipunikalnu8645
    @nipunikalnu8645 4 months ago

    what software do you use for animations?

  • @matheusalvessoaresdecarval1834

    i'm new to machine learning and there's something i don't quite understand:
    if you scale the X (input), does it affect the Y (output)? In a real-life scenario where I want to make a prediction with my model, won't the scaling affect the results? If I shrink the input, won't the output also be smaller?

    • @yamanarslanca8325
      @yamanarslanca8325 1 year ago

      By looking at what you are saying: No, I don't think so (don't take my word though, I am new at ML). I'd say your weights will be computed accordingly. But I read that even scaling your outputs (before the training) is a thing, there are people who do that.
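A quick sketch in the same spirit as the reply above (toy numbers, 1-D least squares): scaling the inputs rescales the learned weight in the opposite direction, so predictions come out in the original units of y either way. Targets are left unscaled here; if you do scale y before training, you must invert that transform on the predictions.

```python
def fit_line(xs, ys):
    """Closed-form ordinary least squares for y = w*x + b (one feature)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    w = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum((x - mx) ** 2 for x in xs)
    return w, my - w * mx

xs = [10.0, 20.0, 30.0, 40.0]
ys = [15.0, 25.0, 35.0, 45.0]  # underlying rule: y = x + 5

# Fit on the raw inputs and predict at x = 25
w1, b1 = fit_line(xs, ys)
pred_raw = w1 * 25.0 + b1

# Fit on scaled inputs (x / 10) and scale the query point the same way
xs_scaled = [x / 10.0 for x in xs]
w2, b2 = fit_line(xs_scaled, ys)
pred_scaled = w2 * (25.0 / 10.0) + b2

# Both predictions are 30.0: the weight absorbed the scaling (w2 == 10 * w1)
print(pred_raw, pred_scaled)
```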

  • @benradmer7668
    @benradmer7668 18 days ago

    Can someone here help me with my data preprocessing project, or does anyone know where I can find help? I am so stuck and can't get over 70%. I really want to do well but don't really know what else to do in preprocessing

  • @736939
    @736939 1 year ago

    Please add NLP course.

    • @NormalizedNerd
      @NormalizedNerd  1 year ago +1

      Hey, have you checked this playlist?
      ua-cam.com/play/PLM8wYQRetTxCCURc1zaoxo9pTsoov3ipY.html
      Feel free to suggest more topics!

  • @Anagha-pm3fu
    @Anagha-pm3fu 1 month ago

    do you use manim?

  • @user-xk8zu3jb2z
    @user-xk8zu3jb2z 4 days ago

    Amazing explanation! Thank you.
    The datasets get normalized just like the speaker! (a joke, couldn't help it)

  • @patralichakraborty1295
    @patralichakraborty1295 1 year ago +1

    ♥️♥️♥️

  • @donghyunlee-zg8hx
    @donghyunlee-zg8hx 10 months ago

    2:15

  • @avranj
    @avranj 1 year ago

    Bro, how can we decide which technique to use when? And if we select normalization, then which kind (min-max etc.)? Could you please elaborate on this?

  • @rayaneaboud9043
    @rayaneaboud9043 3 months ago

    gg budd you opened new horizons for me

  • @muhtasirimran
    @muhtasirimran 1 year ago

    Ahhhahaaa, I was sad seeing your last video was a year ago. Your visualization is really cool and as good as intuitive ml. But he stopped making videos 3 years ago

  • @ritira20mila
    @ritira20mila 1 year ago

    Sorry, I can't understand at 3:10 : Good old [what?] algorithm

  • @suyashrahatekar4964
    @suyashrahatekar4964 2 months ago

    blud comes back after 1 year and then doesn't return even after another year has gone past.

  • @Rockefeller.69
    @Rockefeller.69 1 year ago

    Khan Academy 2.0?

  • @nitika9769
    @nitika9769 1 month ago

    i wanna be as smart as you

  • @azingo2313
    @azingo2313 11 months ago

    Don't want to see your face. Just slides please. Also avoid background music.