AutoML with Auto-Keras (14.1)

  • Published Nov 28, 2024

COMMENTS • 9

  • @amdenis
    @amdenis 1 year ago

    Great video, as always. BTW, like that morph in the beginning... you are getting more sophisticated looking with each year!

  • @yasirabdulkareem9844
    @yasirabdulkareem9844 2 years ago

    Thanks. I have a question: I have a text classification problem and I want to use AutoKeras with word2vec as the word embedding technique. How can I do this?
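    A rough sketch of one way this could look with the Auto-Keras AutoModel API, assuming the Embedding block's pretraining option accepts "word2vec" as its documentation describes; the data, trial count, and epoch count below are placeholders:

        import numpy as np
        import autokeras as ak

        # placeholder data: raw text strings and integer class labels
        x_train = np.array(["good movie", "terrible plot", "loved it",
                            "boring", "fantastic acting", "not worth it"])
        y_train = np.array([1, 0, 1, 0, 1, 0])

        # build the text pipeline by hand so the embedding can be controlled
        input_node = ak.TextInput()
        node = ak.TextToIntSequence()(input_node)
        node = ak.Embedding(pretraining="word2vec")(node)  # pretrained word2vec weights
        node = ak.ConvBlock()(node)
        output_node = ak.ClassificationHead()(node)

        clf = ak.AutoModel(inputs=input_node, outputs=output_node,
                           max_trials=3, overwrite=True)
        clf.fit(x_train, y_train, epochs=5)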

  • @sarveshdeshpande2056
    @sarveshdeshpande2056 2 years ago

    Hi Jeff... I am using AutoKeras and it runs well, but whenever I try to save the model with pickle it gives an encoder error, even though training completes successfully. Can you please advise?
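    For what it's worth, the usual route is to skip pickle and export the best model as a plain Keras model, then save and load that. A sketch, assuming clf is an already-fitted Auto-Keras task (e.g. a StructuredDataClassifier) and the path is a placeholder:

        import autokeras as ak
        from tensorflow.keras.models import load_model

        # export the best model found by the search as a regular tf.keras Model
        model = clf.export_model()

        # save it in TensorFlow's SavedModel format rather than pickling
        model.save("best_model", save_format="tf")

        # reload later; Auto-Keras custom layers need CUSTOM_OBJECTS to deserialize
        loaded = load_model("best_model", custom_objects=ak.CUSTOM_OBJECTS)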

  • @mathman1475
    @mathman1475 2 years ago

    How would you classify Orange?

  • @cyangtw
    @cyangtw 2 years ago

    Hi Jeff, I'm currently an engineering graduate student. My sentiment analysis study uses several methods, one of which involves AutoML, Auto-Keras to be specific. However, many of my peers in my lab discourage me, saying that Auto-Keras's results are not reliable or credible. I've also noticed its limited support for multi-class and multi-label classification, so I can't use it for tasks with three or more classes or labels. I'm wondering if there is any source material or real application case I could cite to prove its usability. For classification tasks with three or more classes or labels, I'm not sure if you have any experience with successful examples. Thanks.

    • @JimBob1937
      @JimBob1937 2 years ago +2

      In theory AutoKeras is just doing something like a grid search, essentially automating the tasks an ML engineer would do when trying to find the best hyperparameters, so it should be fairly robust. The only time I'd expect it to fail is when your training loss curve shifts dramatically over many epochs. Say one particular combination is relatively flat and then dramatically increases in accuracy after X epochs: if your AutoKeras search is shallower than X epochs, it may miss it, in which case you may just need to push your epochs out more.
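      To make those knobs concrete, a minimal sketch (the task type, trial count, epoch count, and data names are arbitrary placeholders): max_trials sets how many candidate configurations the search tries, and epochs sets how deeply each candidate is trained during the search.

          import autokeras as ak

          # try 20 candidate architecture/hyperparameter combinations
          clf = ak.ImageClassifier(max_trials=20, overwrite=True)

          # a shallow per-trial budget is fast but can miss "late bloomers";
          # pushing epochs out gives each candidate a fairer chance
          clf.fit(x_train, y_train, epochs=10)  # x_train/y_train are placeholders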

    • @cyangtw
      @cyangtw 2 years ago

      @@JimBob1937 Thanks for your elaboration. That's what I've been trying to present in my lab as well. Regarding your last point, I'm curious: if you push the epochs out more, won't that cause overfitting? Although I generally already let it run for 1000 epochs (sometimes just let it run for days), or I don't specify how many epochs at all.

    • @JimBob1937
      @JimBob1937 2 years ago +1

      @@cyangtw, as Jeff mentions, the epochs in AutoKeras are shallower compared to full training. The epochs should just be set at a reasonable level for AutoKeras to get a hint at early winners. If you're overfitting, then you're running for way too many epochs for the tool. You then take the results of AutoKeras and do a full training, where you should be more cautious about overfitting. This assumes you're going for the more common broad-but-shallow search, which to me makes sense when hunting for the best hyperparameters; too many epochs will just waste time. Though I'm maybe less familiar with the tool than Jeff, so I'm curious as to his thoughts.
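      As a rough illustration of that two-stage flow (the optimizer, loss, epoch count, and data names are assumptions to adapt to the task): let the search pick an architecture, then retrain it for real with your own early-stopping safeguard.

          import tensorflow as tf

          # take the architecture Auto-Keras found and train it properly
          best_model = clf.export_model()
          best_model.compile(optimizer="adam",
                             loss="sparse_categorical_crossentropy",
                             metrics=["accuracy"])
          best_model.fit(x_train, y_train,
                         validation_data=(x_val, y_val),
                         epochs=200,
                         callbacks=[tf.keras.callbacks.EarlyStopping(
                             patience=10, restore_best_weights=True)])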

    • @JimBob1937
      @JimBob1937 2 years ago +1

      @@cyangtw it looks like the documentation says it also already has protection in place for overfitting:
      "The number of epochs to train each model during the search. If unspecified, by default we train for a maximum of 1000 epochs, but we stop training if the validation loss stops improving for 10 epochs (unless you specified an EarlyStopping callback as part of the callbacks argument, in which case the EarlyStopping callback you specified will determine early stopping)."
      If the validation loss stops improving (or gets worse) it'll stop training. If you're overfitting, your validation performance is likely to degrade as the model loses its generalization, so that would likely trigger the early stopping.
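      In code, overriding that default might look roughly like this (the monitor, patience, trial count, and data names are placeholders):

          import tensorflow as tf
          import autokeras as ak

          clf = ak.TextClassifier(max_trials=10, overwrite=True)

          # passing your own EarlyStopping replaces the default patience-10 rule
          early_stop = tf.keras.callbacks.EarlyStopping(monitor="val_loss",
                                                        patience=5,
                                                        restore_best_weights=True)
          clf.fit(x_train, y_train, callbacks=[early_stop])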