Production Inference Deployment with PyTorch

  • Published 25 Aug 2024
  • After you've built and trained a PyTorch machine learning model, the next step is to deploy it somewhere it can run inference on new input. This video covers the fundamentals of PyTorch production deployment: setting your model to evaluation mode; TorchScript, PyTorch's optimized model representation format; using PyTorch's C++ front end to deploy without interpreted-language overhead; and TorchServe, PyTorch's solution for scaled deployment of ML inference services.
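
The workflow the video describes (eval mode, then TorchScript conversion, then saving for deployment) can be sketched as follows. The model and file name here are placeholders, not the ones used in the video:

```python
import torch
import torch.nn as nn

# A minimal model standing in for a trained network (hypothetical).
model = nn.Sequential(nn.Linear(4, 8), nn.ReLU(), nn.Dropout(0.5), nn.Linear(8, 2))

# 1. Evaluation mode: disables dropout and makes batch norm use its
#    running statistics, so inference behaves deterministically.
model.eval()

# 2. Convert to TorchScript, PyTorch's serializable, optimizable model
#    representation. The saved archive can later be loaded from Python
#    (torch.jit.load) or from the C++ front end (torch::jit::load).
scripted_model = torch.jit.script(model)
scripted_model.save("my_scripted_model.pt")

# 3. Run inference on new input; no_grad skips building the autograd graph.
with torch.no_grad():
    output = scripted_model(torch.randn(1, 4))
print(output.shape)  # torch.Size([1, 2])
```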

COMMENTS • 7

  • @konataizumi5829
    @konataizumi5829 3 years ago +14

    0:00 - Intro
    0:30 - Evaluation Mode
    2:25 - TorchScript
    5:34 - TorchScript and C++
    7:37 - TorchServe
    8:39 - TorchServe example

  • @JosepOriol24
    @JosepOriol24 3 years ago

    Very informative! Thank you!

  • @mostafagvarzaneh3723
    @mostafagvarzaneh3723 2 years ago +5

    At 5:17, shouldn't we save scripted_model (scripted_model.save('my_scripted_model.pt'))?

    • @aixueer4ever
      @aixueer4ever 8 months ago

      Exactly what I wanted to ask. I think you're right.
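
The commenters' point checks out: a plain `nn.Module` has no `.save()` method, so it is the scripted module that must be saved. A minimal sketch (model and file name hypothetical):

```python
import tempfile, os
import torch
import torch.nn as nn

model = nn.Linear(4, 2)
scripted_model = torch.jit.script(model)

# Only the TorchScript module carries its own serialization method;
# calling .save() on the plain nn.Module would raise AttributeError.
print(hasattr(model, "save"))           # False
print(hasattr(scripted_model, "save"))  # True

path = os.path.join(tempfile.mkdtemp(), "my_scripted_model.pt")
scripted_model.save(path)

# The saved archive reloads without the original class definition.
reloaded = torch.jit.load(path)
```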

  • @cristhian4513
    @cristhian4513 3 years ago

    TorchServe is definitely a good option for deployment :))
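
For reference, a typical TorchServe deployment is driven from the command line: archive the saved model into a `.mar` file, then start the server with it registered. This is a CLI sketch, not the exact commands from the video; the model name, file names, and handler choice are assumptions:

```shell
# Package a saved TorchScript model into a .mar archive.
torch-model-archiver --model-name my_model \
    --version 1.0 \
    --serialized-file my_scripted_model.pt \
    --handler image_classifier \
    --export-path model_store

# Start TorchServe with the archived model registered; it then
# serves inference requests over HTTP (port 8080 by default).
torchserve --start --model-store model_store --models my_model=my_model.mar
```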

  • @user-or7ji5hv8y
    @user-or7ji5hv8y 3 years ago +4

    Audio is quite low

    • @robosergTV
      @robosergTV 6 months ago

      The issue is with your hardware. Use a Chrome audio plug-in or an external audio device with good amplification.