How to Deploy ML model to AWS Sagemaker with mlflow and Docker - Step by step

  • Published 26 Oct 2024

COMMENTS • 37

  • @DataScienceGarage
    @DataScienceGarage  3 years ago +2

    Thank you for watching this video. Check out the GitHub repo for this video. It will be updated and modified upon your request. Let me know if you have any requests!
    github.com/vb100/deploy-ml-mlflow-aws/

  • @MirrorNeuron
    @MirrorNeuron 3 years ago +1

    Thanks for the brilliant tutorial. Can you also please add a bonus tutorial on how to automate the prediction steps, so that whenever there is new data it will trigger the prediction?

  • @johnliang3786
    @johnliang3786 1 year ago

    Great and detailed tutorial for a beginner like me, thanks.

  • @MirrorNeuron
    @MirrorNeuron 3 years ago +2

    For others: while running the command "build-and-push-container", make sure your Docker engine is running on your local machine (see the sketch below this thread).

    • @nibinjoseph2136
      @nibinjoseph2136 6 months ago

      It's failing for me every single time.
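
A minimal sketch of the Docker check mentioned in this thread, assuming the MLflow 1.x CLI used in the video and a local Docker installation; both commands are invoked via subprocess purely for illustration:

```python
import subprocess

# Fail fast if the local Docker engine is not running: `docker info`
# exits non-zero when the daemon is unreachable.
subprocess.run(["docker", "info"], check=True, capture_output=True)

# Build the MLflow SageMaker serving image and push it to Amazon ECR,
# using the AWS credentials and region configured on this machine.
subprocess.run(["mlflow", "sagemaker", "build-and-push-container"], check=True)
```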

  • @TheDataScienceChannel
    @TheDataScienceChannel 2 years ago

    I have used both mlflow and SageMaker separately in the past. The two work best when used with each other.

  • @krzysztofformella
    @krzysztofformella 1 year ago

    Great video, really informative! Thank you. Unfortunately, it is now outdated because of MLflow's new methods for performing deployment.

  • @abdjanshvamdjsj
    @abdjanshvamdjsj 2 years ago

    Great video. Just one question: when you did mlflow sagemaker build-and-push-container, it didn't package the model in the image? The image only had the necessary environment installed. That's why in the deploy script you had to provide both the model URI and the image URI separately.
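
As the comment notes, the image built by build-and-push-container only contains the serving environment, so the model artifacts and the image are referenced separately at deploy time. A minimal sketch of such a deploy call, assuming the older mlflow.sagemaker.deploy API shown in the video (newer MLflow releases moved this to mlflow.deployments); the app name, run ID, ECR image URL, region and role ARN below are placeholders:

```python
import mlflow.sagemaker

# Placeholder values -- substitute your own run ID, account, region and role.
mlflow.sagemaker.deploy(
    app_name="my-ml-app",                      # SageMaker endpoint name
    model_uri="runs:/<RUN_ID>/model",          # model artifacts (uploaded to S3 by MLflow)
    image_url="<ACCOUNT_ID>.dkr.ecr.<REGION>.amazonaws.com/mlflow-pyfunc:latest",
    region_name="<REGION>",
    execution_role_arn="arn:aws:iam::<ACCOUNT_ID>:role/<SAGEMAKER_ROLE>",
    mode="create",
    instance_type="ml.m5.large",
    instance_count=1,
)
```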

  • @scar2080
    @scar2080 2 years ago

    Brilliant and clean delivery, you are awesome, mate.
    Can you make another video on using mlflow + sagemaker + (Gensim topic modelling)/FBProphet?

    • @DataScienceGarage
      @DataScienceGarage  2 years ago +1

      Hi! Thanks for your feedback, really appreciated! It is a very interesting topic you're suggesting. Incorporating Gensim topic modelling into the pipeline would be cool. Thanks for that, I will invest my time into this, and if I prepare something, I'll come back with a tutorial here sometime in the near future. :)

    • @scar2080
      @scar2080 2 years ago

      @@DataScienceGarage thanks.

  • @MirrorNeuron
    @MirrorNeuron 3 years ago +1

    Hi there, I have a few more libraries in train.py and have also pip installed them, but they don't show up in the requirements.txt file. Why is that so? Kindly let me know.

    • @DataScienceGarage
      @DataScienceGarage  3 years ago +1

      If I understood correctly, could you add these libraries manually to requirements.txt? You can add as many libraries as you want, depending on your actual project. (See the sketch below this thread.)

    • @MirrorNeuron
      @MirrorNeuron 3 years ago

      @@DataScienceGarage thank you, I just did. Also, can you point to a Docker-based deployment where the ML model training is done within SageMaker?
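
On the requirements.txt point in this thread: MLflow only infers the dependencies of the logged model flavor, so extra libraries used in train.py either have to be declared when logging the model or added to the generated file by hand. A minimal sketch, assuming a scikit-learn model and an MLflow version that supports the pip_requirements argument; the listed packages are just examples:

```python
import mlflow
import mlflow.sklearn
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression

X, y = load_iris(return_X_y=True)
model = LogisticRegression(max_iter=200).fit(X, y)

with mlflow.start_run():
    # pip_requirements replaces the auto-inferred dependency list, so every
    # library named here ends up in the model's requirements.txt / conda.yaml.
    mlflow.sklearn.log_model(
        model,
        artifact_path="model",
        pip_requirements=["scikit-learn", "pandas", "boto3"],  # example list
    )
```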

  • @mdh5213
    @mdh5213 2 years ago

    Hey, can you help me with something? After deploying my model, when I am about to test it, it says "CancelledError: Session has been closed."

  • @tanb13
    @tanb13 2 years ago

    I am getting the error 'Failed to lookup host: 354565582869' while running the command mlflow sagemaker build-and-push-container. Can you please advise how to fix it?

  • @pravinkumar54
    @pravinkumar54 3 years ago

    Thank you so much, much needed.

  • @nibinjoseph2136
    @nibinjoseph2136 8 months ago

    Excellent!!!!

  • @deepakmk663
    @deepakmk663 3 years ago +1

    This was the go-to video for MLOps using mlflow and AWS. I have a quick question: is there a way to deploy a pre-trained DL model without a training job?
    If so, please let me know.

    • @DataScienceGarage
      @DataScienceGarage  3 years ago

      Yes, it is possible. The pre-trained model is one of the artifacts generated after training. You should take your pre-trained model and create the conda.yaml and requirements.txt files, and then push these artifacts into AWS ECR (see the sketch below this thread).

    • @deepakmk663
      @deepakmk663 3 years ago

      @@DataScienceGarage Thank you for your response. I tried reading the model using PyTorch and logging the model in mlflow. This generates a yaml file with only the default libraries.
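
A minimal sketch of deploying a pre-trained model without a training job, as described in this thread: log the already-trained network so MLflow generates the MLmodel, conda.yaml and requirements.txt artifacts. The network, weights file and extra dependency below are placeholders, and extra_pip_requirements (which appends to the default libraries) is assumed to be available in your MLflow version:

```python
import mlflow
import mlflow.pytorch
import torch

# Placeholder architecture and weights file -- no training happens here.
model = torch.nn.Sequential(torch.nn.Linear(4, 2))
model.load_state_dict(torch.load("pretrained_weights.pt"))  # hypothetical file

with mlflow.start_run():
    # Logging the model creates the artifacts the SageMaker serving image expects;
    # extra_pip_requirements appends to the auto-inferred (default) dependencies.
    mlflow.pytorch.log_model(
        model,
        artifact_path="model",
        extra_pip_requirements=["transformers==4.30.0"],  # example extra dependency
    )
```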

  • @vishalwaghmare3130
    @vishalwaghmare3130 1 year ago

    Very helpful video :-)

  • @scar2080
    @scar2080 2 years ago

    Hey mate! Quick question: can you tell me how to choose your compute instance for training in this situation? Let's say I want to use 64 GB RAM and 16 CPUs, just as a scenario. How can we do it? I see you did the training offline, right? Can we also do it online? Please throw me some light here.

  • @victorgabrielsouzabarbosa5488
    @victorgabrielsouzabarbosa5488 2 years ago

    AMAZING!!!!!!

  • @philtoa334
    @philtoa334 3 years ago

    Very nice, thanks.

  • @sugammehta0301
    @sugammehta0301 2 years ago +1

    How do you integrate a Flask API with this?

    • @DataScienceGarage
      @DataScienceGarage  2 years ago +1

      There is no Flask in this tutorial. The idea is to create a Docker image which contains the model artifacts with the help of mlflow and deploy it to AWS SageMaker through Amazon ECR.

    • @sugammehta0301
      @sugammehta0301 2 years ago

      @@DataScienceGarage ok great 😁 but I would really love to have a tutorial on integration with Flask soon... thanks

    • @DataScienceGarage
      @DataScienceGarage  2 years ago +1

      @@sugammehta0301 there could be a couple of options: one is to use AWS Elastic Beanstalk, another is to use Kubernetes to initialize the Flask API. I will create this kind of tutorial one day :)

    • @sugammehta0301
      @sugammehta0301 2 years ago

      @@DataScienceGarage great thank you :))