The guys at AWS need to do more of these videos. The entire platform has so much more functionality since this video was published. It's way harder to navigate the documentation now, and these old videos do not help very much.
How can we keep the SageMaker notebooks under revision control? E.g. how can we track changes to the SageMaker notebook on GitHub?
Excellent presentation! I learned so much! Thanks! Nice to know it's applicable in predicting something in actual images. Epic!
How can data pre-processing be done before making predictions on new raw data using the hosted model? We would need to transform the new data into the same format as the data used for training before making predictions. Some of this pre-processing can be quite compute-intensive or dependent on third-party libraries (e.g. nltk or scikit-learn). Should the pre-processing of the new raw data be done with Lambda functions (custom deployment package), or is there a better approach? Custom deployment packages have a size limit, which makes things very difficult. Should this code be put on an EC2 instance, with the data sent from Lambda to EC2 and then to the hosted model?
Did you find an answer for this?
Hello friends, I wonder if you got the answers to these points?
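One option worth considering (a rough sketch, not official AWS guidance): push the preprocessing into the hosted endpoint itself with a custom inference script, so heavy libraries like scikit-learn live in the model container instead of in a Lambda deployment package. The artifact names below (vectorizer.pkl, model.pkl) are hypothetical; the model_fn/input_fn/predict_fn hooks are the override points the SageMaker scikit-learn serving container exposes.

```python
# inference.py -- sketch of a custom inference script for the SageMaker
# scikit-learn serving container. "vectorizer.pkl" and "model.pkl" are
# hypothetical artifacts packaged into model.tar.gz at training time.
import os
import json
import joblib

def model_fn(model_dir):
    # Load the fitted preprocessor alongside the model from the artifact.
    vectorizer = joblib.load(os.path.join(model_dir, "vectorizer.pkl"))
    model = joblib.load(os.path.join(model_dir, "model.pkl"))
    return vectorizer, model

def input_fn(request_body, content_type):
    # Accept raw JSON and hand it to predict_fn untransformed.
    if content_type == "application/json":
        return json.loads(request_body)["texts"]
    raise ValueError("Unsupported content type: " + content_type)

def predict_fn(raw_texts, artifacts):
    vectorizer, model = artifacts
    # The same transformation used at training time runs next to the model,
    # so Lambda (or any client) only forwards the raw payload.
    features = vectorizer.transform(raw_texts)
    return model.predict(features).tolist()
```

That way a Lambda in front (if you keep one at all) just calls invoke_endpoint with the raw data and stays well under the deployment package size limit.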
Great video! How do you train and deploy models on SageMaker without using Jupyter notebook instances (i.e. serverless)?
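You don't strictly need a notebook instance; the SageMaker Python SDK runs from any machine (laptop, CI job, etc.) that has AWS credentials. A minimal sketch, assuming SDK v2-style argument names and placeholder bucket/role values (the image URI is just an example built-in algorithm):

```python
import sagemaker
from sagemaker.estimator import Estimator

session = sagemaker.Session()
role = "arn:aws:iam::123456789012:role/MySageMakerRole"  # placeholder role ARN

estimator = Estimator(
    image_uri="382416733822.dkr.ecr.us-east-1.amazonaws.com/linear-learner:1",  # example image
    role=role,
    instance_count=1,
    instance_type="ml.m5.xlarge",
    output_path="s3://my-bucket/output",  # placeholder bucket
    sagemaker_session=session,
)
# Example hyperparameters only; the required set depends on the algorithm.
estimator.set_hyperparameters(predictor_type="binary_classifier", feature_dim=10)

# Training and hosting both run on SageMaker-managed instances;
# this script itself can run anywhere with credentials.
estimator.fit({"train": "s3://my-bucket/train.csv"})
predictor = estimator.deploy(initial_instance_count=1, instance_type="ml.m5.large")
```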
What file formats can be imported into SageMaker?
E.g. CSV, JSON, etc.
Where is the next part, which actually deploys the model?
Is there any guide available for on-demand prediction, as opposed to the RealTimePredictor?
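If by on-demand you mean scoring a batch of data in S3 without keeping an endpoint running, SageMaker Batch Transform does that. A rough sketch with the Python SDK; the bucket, role, and image values are placeholders:

```python
import sagemaker
from sagemaker.model import Model

session = sagemaker.Session()

# Point at an already-trained model artifact and the container that serves it.
model = Model(
    image_uri="<same container image used for real-time hosting>",  # placeholder
    model_data="s3://my-bucket/output/model.tar.gz",                # placeholder
    role="arn:aws:iam::123456789012:role/MySageMakerRole",          # placeholder
    sagemaker_session=session,
)

transformer = model.transformer(
    instance_count=1,
    instance_type="ml.m5.xlarge",
    output_path="s3://my-bucket/batch-output",
)
transformer.transform(
    data="s3://my-bucket/batch-input/data.csv",
    content_type="text/csv",
    split_type="Line",
)
transformer.wait()  # predictions land in the S3 output path; no persistent endpoint
```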
Unrelated, but I really like your Google Chrome productivity theme background. Link?
Hi,
Thanks for the vid.
How exactly did you upload the GitHub repo to the notebook? I can't understand.
This video expects some prior knowledge. Can you please recommend your other videos so I can understand more of what you are talking about?
Many thanks in advance!
This is advanced!!
You can upload the zip and use the terminal to unzip it, or use the git command to pull it.
You can find the terminal in the New dropdown.
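If you'd rather stay inside the notebook than open a terminal, the same git suggestion can be run from a cell. A tiny sketch; the repo URL and target folder are placeholders, not the video's actual repo:

```python
# Clone the repo into the notebook instance's persistent SageMaker directory.
import subprocess

repo_url = "https://github.com/your-user/your-repo.git"  # placeholder URL
subprocess.run(
    ["git", "clone", repo_url, "/home/ec2-user/SageMaker/your-repo"],
    check=True,
)
```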
What's the difference between Amazon SageMaker and EC2?
SageMaker is one of the services of AWS, while EC2 is a virtual server for computing, just like the processor of your computer.
How easy is it to use Keras in SageMaker?
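Fairly easy via script mode, at least in rough outline: you write a normal tf.keras training script and hand it to the TensorFlow estimator. A sketch, assuming SDK v2; the role, bucket, script name, and framework/Python versions below are placeholders, not tested values:

```python
from sagemaker.tensorflow import TensorFlow

estimator = TensorFlow(
    entry_point="train.py",  # your own Keras (tf.keras) training script
    role="arn:aws:iam::123456789012:role/MySageMakerRole",  # placeholder
    instance_count=1,
    instance_type="ml.p3.2xlarge",
    framework_version="2.4",
    py_version="py37",
)
estimator.fit({"training": "s3://my-bucket/keras-data"})  # placeholder bucket
```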
Just horribly confusing.
Can I code in R?
The Amazon SageMaker offering is just so bad and not easy to set up.
Amazon, what just happened? Why is this crap actually public? Everyone makes bugs while coding, but showing it to everyone... Is this why the delivery of my packages is messed up all the time? I hope you won't go into self-driving cars or space wars with Elon.