ML Teach by Doing Lecture 14: One Hot Encoding

  • Published 15 Nov 2024

COMMENTS • 14

  • @sriram1208 3 months ago +1

    I have been following since day 1, and I am here at this lecture today. Thanks a lot for your efforts 🤝

  • @abishekdhakal5786 6 months ago +1

    Lecture 14 done. Thanks for this lecture. I used to wonder how we could actually represent raw data to feed it to an ML algorithm. Through one-hot encoding, it's kind of easy to represent those categorical datasets as vectors and finally feed them to the algorithm to make it much more efficient.

  • @PraveenKumar-lv6cu 5 months ago +1

    Very interesting lectures. I have been enjoying this lecture series since the first day. Thanks, Dr. Raj, for creating this kind of content.

  • @eyeleon 7 months ago +1

    Don't stop this series please.

    • @vizuara 7 months ago +1

      Thanks for the support!

  • @rajubalasubramaniyam9002 7 months ago +2

    1. Is the size of a one-hot encoding proportional to the number of categories, and is it hardcoded?

    • @vizuara 7 months ago +1

      Yes, that's correct. The size of a one-hot encoded vector is directly proportional to the number of categories in the dataset for a particular feature. Regarding hardcoding: the encoding depends on the data and needs to be determined before use. Once the categories are known and the encoding scheme is established, the vector for each category is indeed fixed. For example, if there are three categories (Cat, Dog, Mouse) and Cat is represented as [1, 0, 0], this encoding is used consistently across the dataset.
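The fixed mapping described in this reply can be sketched in plain Python (a minimal illustration using the Cat/Dog/Mouse example above, not the lecture's actual code):

```python
# A fixed one-hot mapping for a known set of categories.
categories = ["Cat", "Dog", "Mouse"]

def one_hot(category, categories):
    """Return a one-hot vector whose length equals the number of categories."""
    vec = [0] * len(categories)
    vec[categories.index(category)] = 1
    return vec

print(one_hot("Cat", categories))    # [1, 0, 0]
print(one_hot("Mouse", categories))  # [0, 0, 1]
```

Because the category list is fixed up front, the same vector is produced for a given category every time it appears in the dataset.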

  • @MEHEDIHASAN-jc7xm 7 months ago +2

    Sir, I couldn't get a feel for the mathematical operation behind standardizing. Is it possible to convey the essence of that theory while you are teaching? (I'm a bit uncomfortable and feel alienated by statistical calculations.) If you could give a feel for those statistical calculations side by side, or make a dedicated video on all the statistical theory and technical terms needed for ML, with an intuition for those theories, it would really make my day!
    Thanks sir, it's wonderful going along with you!

    • @vizuara 7 months ago +1

      Hello! Yes, I am planning to make a detailed statistics video, which will make your conceptual understanding much clearer. Thanks for the feedback. Meanwhile, here is a document you can go through to understand feature standardization, explained without complicated maths: medium.com/@shivanipickl/what-is-feature-scaling-and-why-does-machine-learning-need-it-104eedebb1c9.
      Hope this is helpful!

    • @MEHEDIHASAN-jc7xm 7 months ago

      @vizuara thanks!

  • @itsamitnitrkl5199 6 months ago

    Sir, I have a doubt about standardizing: during standardization, do we find the of features, or the bits which are in one categorical feature, as in your last example, of F1 = only two bits?

  • @praveenchaturvedi2761 6 months ago +1

    Lecture 14 done.

  • @prathameshj994 3 months ago

    lec 14 done