Understanding Pipeline in Machine Learning with Scikit-learn (sklearn pipeline)

  • Published 4 Sep 2024

COMMENTS • 26

  • @AnkitGupta005 2 years ago +5

    Short and crisp. Thank you!

  • @fabianaltendorfer11 1 year ago +4

    That's a great introduction to pipelines! Thanks

  • @kianaliaghat7740 2 years ago +3

    Thanks for your short, useful introduction!
    It helped me a lot.

  • @maxwellpatten9227 7 months ago +2

    This is excellent. Thank you

  • @Hajar1992ful 2 years ago +1

    Thank you for this useful video!

  • @sebacortes8812 1 year ago +1

    Thank you very much, greetings from Chile!!

  • @muhammadjamalahmed8664 3 years ago +2

    Love your tutorials.

  • @hiba8484 1 year ago +1

    Thanks, it's really helpful.

  • @aszx-tv4pq 3 months ago

    Hi there, I'm very happy with this channel. Could you explain the pipeline part a bit more simply?

  • @adiver_ 8 months ago +2

    Hello,
    Since you imported PolynomialFeatures and transformed the independent variable (X_train) so that it can be fitted with a polynomial regression, why did you put LinearRegression() as the estimator in the last tuple of the list? Shouldn't you have used a polyfit function or something else?
    NOTE: I am a beginner here, so these doubts may be silly.

    • @DrDataScience 8 months ago +1

      Good question! We have already created all the polynomial terms that we need, i.e., x, x^2, x^3, etc. Thus, we can now view this as a linear regression problem with respect to the "new/artificial" features. (A short sketch of such a pipeline follows this thread.)

    • @adiver_ 8 months ago

      I appreciate your reply, it cleared up exactly what I was asking. Thanks @DrDataScience

    • @adiver_ 8 months ago

      @DrDataScience One more thing I need to ask, if you can spare some time: I have seen people do feature scaling with StandardScaler() before the polynomial features and the estimator in the Pipeline argument, so is scaling a necessary step or can we skip it?
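
      A minimal sketch of the kind of pipeline discussed in this thread, using toy data and illustrative step names ("poly", "linreg") that are not taken from the video. PolynomialFeatures builds the new columns x, x^2, x^3, and LinearRegression then fits a model that is linear in those columns; a StandardScaler step is shown commented out as the optional scaling step mentioned above.

      import numpy as np
      from sklearn.pipeline import Pipeline
      from sklearn.preprocessing import PolynomialFeatures  # , StandardScaler
      from sklearn.linear_model import LinearRegression

      # Toy data: one feature with a cubic relationship plus noise.
      rng = np.random.default_rng(0)
      X = rng.uniform(-3, 3, size=(100, 1))      # 100 samples, 1 feature
      y = 0.5 * X[:, 0] ** 3 - X[:, 0] + rng.normal(scale=0.5, size=100)

      model = Pipeline([
          # ("scaler", StandardScaler()),        # optional scaling step (assumption, not from the video)
          ("poly", PolynomialFeatures(degree=3, include_bias=False)),  # adds x, x^2, x^3
          ("linreg", LinearRegression()),        # linear fit on the new features
      ])

      model.fit(X, y)
      print(model.predict(X[:5]))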

  • @gabrielmarchioli4669 2 years ago

    Great video. Helped me a lot

  • @rishidixit7939 3 months ago

    Why are all arrays converted to column matrices when applying sklearn?

    • @DrDataScience 3 months ago +1

      Because each column corresponds to a feature or attribute of your data set. Thus, the number of elements in that column vector is equal to the number of samples.
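
      A small sketch of the point above, with made-up numbers: scikit-learn estimators expect X to be 2-D with shape (n_samples, n_features), so a 1-D array holding a single feature is reshaped into a column before fitting.

      import numpy as np
      from sklearn.linear_model import LinearRegression

      x = np.array([1.0, 2.0, 3.0, 4.0])   # shape (4,): just four numbers
      y = np.array([2.0, 4.0, 6.0, 8.0])

      X = x.reshape(-1, 1)                 # shape (4, 1): 4 samples, 1 feature (a column)
      print(X.shape)                       # (4, 1)

      LinearRegression().fit(X, y)         # fit() requires the 2-D (samples, features) layout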

  • @nachoeigu 2 years ago +1

    I have one big question: what is the difference between building a machine learning application with Pipeline and building one with an OOP technique? They look the same to me.

    • @DrDataScience 2 years ago +1

      Everything in Python is defined as a class so we use OOP all the time. Pipeline provides a nice flexible way to combine multiple transformers and an estimator.
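
      A short sketch contrasting the two styles asked about above, with illustrative data and step names that are not from the video: the "manual" approach calls each object yourself in the right order, while Pipeline chains the same transformer and estimator objects into a single estimator.

      import numpy as np
      from sklearn.pipeline import Pipeline
      from sklearn.preprocessing import StandardScaler
      from sklearn.linear_model import LinearRegression

      rng = np.random.default_rng(1)
      X = rng.normal(size=(50, 2))
      y = X @ np.array([1.5, -2.0]) + rng.normal(scale=0.1, size=50)

      # "Manual" OOP style: call each object yourself, in the right order.
      scaler = StandardScaler()
      X_scaled = scaler.fit_transform(X)
      reg = LinearRegression().fit(X_scaled, y)

      # Pipeline style: the same objects chained into one estimator that
      # applies them in order on fit() and predict().
      pipe = Pipeline([("scaler", StandardScaler()), ("reg", LinearRegression())])
      pipe.fit(X, y)

      print(pipe.named_steps["reg"].coef_)  # the fitted steps remain ordinary objects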

  • @burakakay6632 1 year ago

    Thank you :=}