Thank you for watching this video. Check the GitHub repo for this video. It will be updated and modified upon your request. Let me know if you have any!
github.com/vb100/deploy-ml-mlflow-aws/
Thanks for the brilliant tutorial. Could you also please add a bonus tutorial on how to automate the prediction step, so that whenever there is new data it triggers a prediction?
great and detailed tutorial for beginner like me, thanks
Happy that it helped, thanks for watching!
For others: before running the "build-and-push-container" command, make sure your Docker engine is running on your local machine.
It kept failing for me every single time until I did that.
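A quick preflight check for this can be scripted. The sketch below is only an illustration: it assumes the docker CLI is on your PATH and simply asks the daemon to respond before you attempt the MLflow command.

```python
import shutil
import subprocess

def docker_engine_running() -> bool:
    """Return True if the Docker CLI is installed and the daemon answers."""
    if shutil.which("docker") is None:
        return False  # Docker CLI not installed / not on PATH
    # "docker info" talks to the daemon, so it fails when the engine is down.
    result = subprocess.run(["docker", "info"], capture_output=True, text=True)
    return result.returncode == 0

if not docker_engine_running():
    print("Start Docker Desktop (or dockerd) before running "
          "'mlflow sagemaker build-and-push-container'.")
```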
I have used both MLflow and SageMaker separately in the past. The two work best when used together.
Great video, really informative! Thank you. Unfortunately it is somewhat outdated because of MLflow's newer deployment methods.
Thank you for such feedback! Appreciate! :)
Great video. Just one question: when you ran mlflow sagemaker build-and-push-container, it didn't package the model into the image, right? The image only had the necessary environment installed. That's why in the deploy script you had to provide both the model URI and the image separately.
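Right: the image built by build-and-push-container carries only the serving environment, so the deploy step takes the image and the model location as separate arguments. A sketch of how those two pieces come together (the app name, account ID, region, run ID, and image tag below are all placeholders, and the actual mlflow.sagemaker.deploy call is shown commented out since it needs AWS credentials):

```python
# Placeholder values -- substitute your own account, region, and run ID.
region = "us-east-1"
account_id = "123456789012"

# Environment-only image pushed to ECR by 'mlflow sagemaker build-and-push-container'.
image_url = f"{account_id}.dkr.ecr.{region}.amazonaws.com/mlflow-pyfunc:1.30.0"

# Model artifacts tracked by MLflow -- passed separately from the image.
model_uri = "runs:/0123456789abcdef/model"

deploy_kwargs = dict(
    app_name="sagemaker-demo-app",
    model_uri=model_uri,
    image_url=image_url,
    region_name=region,
    mode="create",
)

# With MLflow 1.x installed and AWS credentials configured, this would be:
# import mlflow.sagemaker as mfs
# mfs.deploy(**deploy_kwargs)
print(deploy_kwargs["app_name"])
```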
Brilliant and clean delivery.. you are awesome mate..
Can you make another video on using MLflow + SageMaker with Gensim topic modelling or Facebook Prophet?
Hi! Thanks for your feedback, really appreciated! That's a very interesting topic you're suggesting; incorporating Gensim topic modelling into the pipeline would be cool. I will invest some time in this, and if I prepare something I'll come back with a tutorial here sometime in the near future. :)
@@DataScienceGarage thanks.
Hi there, I have a few more libraries in train.py and have also pip-installed them, but they don't show up in the requirements.txt file. Why is that? Kindly let me know.
If I understood you correctly, you could add these libraries manually to requirements.txt. You can add as many libraries as you want, depending on your actual project.
@@DataScienceGarage thank you, I just did. Also, can you point me to a Docker-based deployment where the ML model training is done within SageMaker?
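The manual edit suggested above can be sketched like this. Everything here is illustrative: the model/ artifact directory, library names, and version pins are placeholder assumptions you would replace with your actual run's artifacts and the libraries train.py really imports.

```python
from pathlib import Path

# Hypothetical artifact directory; point this at your MLflow run's model folder.
artifact_dir = Path("model")
artifact_dir.mkdir(exist_ok=True)

req_file = artifact_dir / "requirements.txt"
if not req_file.exists():
    # Example of what MLflow may have auto-generated.
    req_file.write_text("mlflow\nscikit-learn==1.3.0\n")

# Extra libraries used by train.py that were not picked up automatically.
extra_libs = ["gensim==4.3.2", "nltk==3.8.1"]
existing = req_file.read_text().splitlines()
with req_file.open("a") as f:
    for lib in extra_libs:
        if lib not in existing:  # avoid duplicate entries
            f.write(lib + "\n")

print(req_file.read_text())
```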
Hey, can you help me with something? After deploying my model, when I am about to test it, it says "CancelledError: Session has been closed."
I am getting the error 'Failed to lookup host: 354565582869' while running the command mlflow sagemaker build-and-push-container. Can you please advise how to fix it?
Thank you so much, much needed.
Thanks for such feedback, appreciate it!
Excellent!!!!
Thanks for feedback! :)
This was the go-to video for MLOps using MLflow and AWS. I have a quick question: is there a way to deploy a pretrained DL model without a training job?
If so, please let me know.
Yes, it is possible. A pre-trained model is just one of the artifacts normally generated after training. You should package your pre-trained model together with a conda.yaml file and a requirements.txt file, and then push the serving container image built from these artifacts to AWS ECR.
@@DataScienceGarage Thank you for your response. I tried loading the model with PyTorch and logging it in MLflow, but this generates a yaml file with only the default libraries.
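The artifact layout described above can be sketched as follows. The file names follow MLflow's conventions, but the directory name, model.pth stand-in, and every version pin are placeholders for illustration only.

```python
from pathlib import Path

root = Path("pretrained_artifacts")
root.mkdir(exist_ok=True)

# Inference environment definition (example pins only).
(root / "conda.yaml").write_text(
    "name: mlflow-env\n"
    "channels:\n"
    "  - conda-forge\n"
    "dependencies:\n"
    "  - python=3.9\n"
    "  - pip\n"
    "  - pip:\n"
    "      - mlflow\n"
    "      - torch==2.1.0\n"
)

# Pip requirements for the serving container.
(root / "requirements.txt").write_text("mlflow\ntorch==2.1.0\n")

# Stand-in for the pre-trained weights you already have.
(root / "model.pth").write_bytes(b"placeholder")

print(sorted(p.name for p in root.iterdir()))
```

Regarding the yaml with default libraries: MLflow's log_model functions accept a conda_env argument, so passing a dictionary or file like the one above is one way to override the defaults.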
Very helpful video :-)
Thanks a lot! Glad it was useful :)
Hey mate, quick question: how do you choose your compute instance for training in this situation? Let's say I want to use 64 GB RAM and 16 CPUs, just as a scenario, hehe. How can we do that? I see you did the training offline, right? Can we also do it online? Please throw me some light here.
AMAZING!!!!!!
Thanks for watching! Really appreciate your feedback!
Very nice, thanks.
Thanks for watching!
How do you integrate flask api with this?
There is no Flask in this tutorial. The idea is to create a Docker image for serving the model with the help of MLflow and deploy it to AWS SageMaker through Amazon ECR.
@@DataScienceGarage ok great 😁 but I would really love to have a tutorial on integration with Flask soon... thanks
@@sugammehta0301 there could be a couple of options: one is to use AWS Elastic Beanstalk, another is to use Kubernetes to run the Flask API. I will create this kind of tutorial one day :)
@@DataScienceGarage great thank you :))