Thank you for your time and effort in making this video!
May I offer some comments?
1) The good:
- The content is great
- About the video itself, everything is great (e.g., lighting, speech, clarity of explanation, and so on)
2) The not so good:
- People looking for solutions for their showcase projects don't see what they're looking for in this video, and will probably go back to tutorials about using Flask to serve models. Here's why: we probably want to make a web app or a mobile app (e.g., using React Native) and include a link to it in the resume. Your content stops short of that, so it isn't convincing enough.
Would love to see an update to this video, perhaps with a more complex model framework such as PyTorch or Huggingface!
BentoML has updated some of its infrastructure to handle custom objects, which can store the tokenizer for these models, but there's a gap in resources that demonstrate how to utilize that tokenizer in the `predict()` service function definition.
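For anyone hitting the same gap, here is a hedged sketch of how the tokenizer stored via `custom_objects` can be retrieved and used inside the service. This assumes BentoML 1.x; the model tag `my_text_model`, the service name, and the tokenizer call are illustrative placeholders, not from the video.

```python
# Sketch (assumptions: BentoML 1.x, a model previously saved with
#   bentoml.pytorch.save_model("my_text_model", model,
#                              custom_objects={"tokenizer": tokenizer})
# "my_text_model" is a placeholder tag.)
import bentoml
from bentoml.io import JSON, Text

bento_model = bentoml.models.get("my_text_model:latest")
# The custom objects saved alongside the model are available here:
tokenizer = bento_model.custom_objects["tokenizer"]
runner = bento_model.to_runner()

svc = bentoml.Service("text_classifier", runners=[runner])

@svc.api(input=Text(), output=JSON())
def predict(text: str) -> dict:
    # Tokenize in user code before handing tensors to the runner.
    encoded = tokenizer(text, return_tensors="pt")
    logits = runner.run(encoded["input_ids"])
    return {"logits": logits.tolist()}
```

This is a service-definition fragment: it only runs inside a project where the model (and its tokenizer) have actually been saved to the BentoML model store.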
This is super cool! I can't wait to share it with my team.
Best video on bentoml!
Thank you!
Great explanation, thank you!
Thank you for sharing
Thank you for the tutorial. I have a question: in my project I have multiple edge devices; can I use a BentoML runner instance to run on multiple nodes?
Thanks 👍👍
thank you for sharing!
Dear sir, may I ask why we should use BentoML when there are loads of robust serving frameworks? In other words, can you compare BentoML with the others? Thank you!
Can you do blind source separation?
I had this error. How can I check the server logs to find out what exactly went wrong with the prediction: "An error has occurred in BentoML user code when handling this request, find the error details in server logs"
I'm stuck: `from mldeployment import training` gives
`ImportError: cannot import name 'training' from 'mldeployment' (unknown location)`
same issue
How do you deploy a voice recognition system (as in your previous videos) on Bentos?
Here at 13:39, on line 16, the input and output are NumpyNdarray; what would they be for audio data?
It depends on the type of audio representation you use as input (e.g., spectrogram, waveform). In most cases it would be NumpyNdarray. For the output, again, it depends: most likely you'll have one-hot encoding, as in the example in this video, for which you can also use NumpyNdarray. Hope this helps!
@@ValerioVelardoTheSoundofAI Thank you I will try this
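A minimal sketch of what the reply above describes, using BentoML 1.x's `NumpyNdarray` IO descriptor for both sides. The model tag `audio_model` and the array shapes are assumptions, not from the video.

```python
# Sketch (assumption: BentoML 1.x, a model already saved under the
# placeholder tag "audio_model").
import numpy as np
import bentoml
from bentoml.io import NumpyNdarray

runner = bentoml.models.get("audio_model:latest").to_runner()
svc = bentoml.Service("audio_classifier", runners=[runner])

# Input: e.g. a (n_mels, n_frames) spectrogram or a 1-D waveform.
# Output: a vector of class probabilities (one-hot-style).
@svc.api(input=NumpyNdarray(dtype="float32"), output=NumpyNdarray())
def predict(audio: np.ndarray) -> np.ndarray:
    return runner.run(audio)
```

Like any BentoML service definition, this fragment only runs against a populated model store, so treat it as a shape for your own `service.py` rather than a drop-in file.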
Hey, I need to know how Python works when you have two separate train and test files, i.e., we train on one file and then test on another CSV file. Please guide me if you know how that works.
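In case it helps, the usual pandas/scikit-learn pattern for two separate CSV files looks like the sketch below. The column names and data are made up, and the files are generated inline only so the snippet is self-contained; in a real project `train.csv` and `test.csv` already exist on disk.

```python
# Sketch of the two-file workflow: fit on train.csv, evaluate on test.csv.
import pandas as pd
from sklearn.linear_model import LogisticRegression

# Hypothetical data, written out only to make this snippet runnable.
pd.DataFrame({"x1": [0, 1, 2, 3, 8, 9, 10, 11],
              "x2": [1, 0, 1, 0, 9, 8, 9, 8],
              "label": [0, 0, 0, 0, 1, 1, 1, 1]}).to_csv("train.csv", index=False)
pd.DataFrame({"x1": [1, 9], "x2": [1, 9],
              "label": [0, 1]}).to_csv("test.csv", index=False)

train = pd.read_csv("train.csv")   # the model is fit only on this file
test = pd.read_csv("test.csv")     # never seen during training

model = LogisticRegression().fit(train[["x1", "x2"]], train["label"])
accuracy = model.score(test[["x1", "x2"]], test["label"])
print(accuracy)
```

The key point is that the test file is loaded separately and only used for `score` (or `predict`), never for `fit`.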
Why did you make a json file of the input?
I am not able to containerize using BentoML.
Hello, I have a question: I want to make a project on removing stutter from a speech signal. What data structures or tools do I need to make this?
What is stutter in speech? If a person (child or adult) stutters, I think it would be impossible to clear it.
@@jawadmansoor6064 Thank you for the reply!
I am a stutterer myself, so I just thought of making this. Impossible? Can't we filter out the redundant signals or something like that?
@@haoshoku8496 Not through DSP, though we might be able to train an AI model that can filter it out or even generate a new voice.
Do you speak English, by the way? I am interested in this project.
@@jawadmansoor6064 I think there is a dataset for this, but I don't know machine learning yet, so I decided to try DSP first; if DSP doesn't work, then machine learning is maybe the only solution. And yes, I do speak English!
@@haoshoku8496 Thank you. I will look for it now.
To be very honest, BentoML doesn't seem to be adding any value! Why would anyone switch from FastAPI to this?
Same question
Thank you for sharing