Such an awesome video. I've been looking for a little while now and this is exactly what I'm looking for. Additionally, the way you presented everything was super quick and easy to understand (which I appreciate since I'm currently running a fever lol). Either way, you're a life saver, and I want to thank you so much for all your hard work.
Thanks for making this helpful video. I really enjoyed watching it.
Whisper is a huge step forward to local speech recognition.
Appreciate the feedback. Whisper is pretty impressive.
Great video, thanks Rob! ... I tried the model in German a few times and it worked quite well, but not without errors. One time I took an audio example from Hermann Hesse's wonderful book Narcissus and Goldmund, and the model rendered 'Narciss' (German for Narcissus) as 'Nazi'. ... so I will still read and correct future results before sending them to my boss. ;-)
Haha. Love the story. Hopefully these models will just continue to get better.
Hello all! Nice first impression! I ran an 8-minute mp3 file and it worked perfectly. I am pretty surprised. q=)
Great to hear! I've been very impressed by whisper too.
Thanks for this valuable video. You deserve more views and likes
Really appreciate that. Share the video with a friend to spread the word 😊
Really nice explanation and demonstration. You, sir, have a new subscriber (me).
Thanks. Glad to have you as a subscriber
More content like this please! And thank you for the tutorial.
Seriously, such an awesome project!!!
Glad you liked it! I appreciate the comment.
Hey Medallion! What's the best way/library to perform text-to-speech, speech-to-text, and speech-to-speech translation between languages? I'm from India, so a model that's capable of a lot of indigenous languages is necessary. And if possible, could you make a video about this?
Thanks for the comment. There is a text-to-speech library that uses the Google API, and this one can be used offline: github.com/nateshmbhat/pyttsx3 - as for the different languages, I think it's going to depend a lot on what is already out there. Are the languages part of the whisper library? If so, that's a good start; it allows for some translation, and maybe in the future they will add TTS.
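For anyone curious, a minimal pyttsx3 sketch looks something like this (it runs fully offline; which voices and languages are available depends entirely on what your OS provides):

```python
import pyttsx3

engine = pyttsx3.init()          # picks the platform driver (sapi5 / nsss / espeak)
engine.setProperty("rate", 150)  # speaking rate in words per minute

# List the installed voices -- language coverage comes from the OS, not pyttsx3
for voice in engine.getProperty("voices"):
    print(voice.id, voice.languages)

engine.say("Hello from pyttsx3!")
engine.runAndWait()              # blocks until the queued speech finishes
```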
Thanks!
Thanks for your kind, detailed explanation! Could you explain how improving a Whisper model works?
Do I need text, audio, or both? I would like to improve recognition of new words in the specific field I'm targeting.
Thanks for providing details. Does it support live streaming audio? Instead of using a pre-recorded audio clip, can it transcribe live speech?
Great question. I believe there are some packages out there that can do it near real time, but I haven’t used one myself.
Hi Medallion, Thanks for the video.
I've followed both of your processes, but when I run it I get a FileNotFoundError: [WinError 2] The system cannot find the file specified. I've got my test file in the same folder as my main.py. Any ideas what I need to do to get it to work?
Interesting. You might be referencing it wrong. It needs to be in the same folder as the script. I’d need to see the full stack trace though.
@@robmulla I get the same error
The issue was not installing FFmpeg properly. Thanks for the great vid!
@Rob I have the same issue.
Cool video! I want to get this working for live speech-to-text, since it is fast enough to run in real time, but it seems like, since you can't pass in continuous audio, you would run into issues where the model would not have the previous output as context and could easily get cut off mid-word. Any ideas for how to tackle that issue?
That's a great point. You should check out this repo where someone made whisper work with a microphone input: github.com/mallorbc/whisper_mic
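If you'd rather roll your own, a rough (untested) sketch is to record fixed-size chunks from the microphone with the sounddevice package and feed each one to transcribe(). The linked repo handles the harder part, detecting pauses so chunks don't cut mid-word:

```python
import sounddevice as sd
import whisper

model = whisper.load_model("base")
SAMPLE_RATE = 16000   # whisper expects 16 kHz mono audio
CHUNK_SECONDS = 10    # shorter chunks = lower latency but more mid-word cuts

while True:
    # Record one fixed-length chunk from the default microphone
    audio = sd.rec(int(CHUNK_SECONDS * SAMPLE_RATE),
                   samplerate=SAMPLE_RATE, channels=1, dtype="float32")
    sd.wait()  # block until the recording is done

    # transcribe() accepts a float32 numpy array directly
    result = model.transcribe(audio.flatten(), fp16=False)
    print(result["text"])
```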
Hey guys, please can anyone help me with this issue? I am trying to run whisper on my machine and I am getting this error in cmd: UserWarning: FP16 is not supported on CPU; using FP32 instead
warnings.warn("FP16 is not supported on CPU; using FP32 instead").
I use Windows 10 with an RTX 2060 GPU. It also seems it runs on my CPU instead of my NVIDIA GPU. For more detail: I created a Python virtual environment and pip installed whisper in that virtual environment.
Hey Dimoris, unfortunately I don't have a Windows machine. It does look like you are using the CPU and not the GPU. Are you sure you have CUDA installed?
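A quick way to check whether PyTorch can actually see the GPU — if this prints False, you likely have a CPU-only torch build and need to reinstall it following the CUDA instructions on pytorch.org:

```python
import torch

print(torch.__version__)
print(torch.cuda.is_available())           # False = CPU-only torch or missing CUDA
if torch.cuda.is_available():
    print(torch.cuda.get_device_name(0))   # should show your RTX 2060
```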
Do you have any advice for how to fix the 'ModuleNotFoundError: no module named 'torch._C''? I looked around the internet for answers but none of them work; I even tried different Python versions.
Looks like you need to install PyTorch. You can do so by running "pip install torch" in your Python environment. Good luck!
Great video 👍, just wanted to know in detail how to use this, and now that I've seen your video I understand 100%. Btw, which software (or whatever it is) are you writing the code in?
Thanks! I'm using JupyterLab; check my channel for my video on Jupyter.
DaVinci Resolve needs to use this to generate subtitles 👌
I use it to add subtitles to my YouTube videos. 😎
Can you give it more than 30 seconds of audio or are you forced to break up the source file?
I believe that since the model was trained on 30-second clips, the audio must be split before being processed through the pipeline. However, the built-in transcribe method handles that for you.
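In practice that means you can hand transcribe() a long file directly and get segment-level timestamps back; something like this (filename is just a placeholder):

```python
import whisper

model = whisper.load_model("base")

# transcribe() slides a 30-second window over the file internally,
# feeding each window's output as context for the next one
result = model.transcribe("long_interview.mp3")

print(result["text"])  # the full transcript
for seg in result["segments"]:
    print(f"[{seg['start']:.1f}s -> {seg['end']:.1f}s] {seg['text']}")
```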
TRY!
I just transcribed an hour-long audio file for work... worked like a charm. It took a long time though, but still less time than if I had transcribed it by hand.
Noob question, but does this work offline, or is it an API call to OpenAI?
This model is completely open source, so you can download the weights and run it offline.
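The weights are downloaded once on the first load_model() call and cached locally (by default under ~/.cache/whisper), so after that it runs with no network connection. A minimal sketch:

```python
import whisper

# First call downloads the weights; later calls load from the local cache.
# download_root pins the cache location, e.g. for an air-gapped machine.
model = whisper.load_model("base", download_root="./models")

result = model.transcribe("audio.mp3")
print(result["text"])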
Hi Rob, thank you for taking the time to share out of the wealth of your knowledge. I tried running the model, and it keeps telling me numpy is not available. I used pip install numpy, and I realized that numpy is available. Please, what could the problem be? Thank you. I want to use this for qualitative research. Thank you once again, and I hope to hear from you.
That’s strange, check your internet connection because that package definitely should be available. Thanks for watching!!
@@robmulla Thank you, Rob. I posted the question on the Q&A page on GitHub. The issue was my Python version: I had 3.10, and PyTorch wasn't compatible with any version above 3.9, so I needed to downgrade Python to get PyTorch (and its NumPy support) working.
Thanks once again.
Can whisper analyze voice? Like screen and score dialect, etc.?
I don't believe so....
@@robmulla just figured it out :)
I'm new to AI models. I want to use Whisper to help my dad who has dysarthria. Would I be able to train Whisper to recognize speech from a dataset of dysarthria speakers? Also, would it be possible for me to then use a text-to-speech to translate the text back?
Thanks for sharing! So is it possible to train a new language with this model?
Thanks for this! I'm fairly new to NLP but already amazed by Whisper. Any idea what the *max_initial_timestamp* argument is used for in DecodingOptions()? I'm curious to know what the smallest timestamp window it's possible to achieve is. Anyone know if it's possible to pull timestamps for each word's onset? I'm seeing ranges of 2-5 seconds by default on my samples (which are kinda verbose).
Great question. I don't know too much about the details but I did find it in the source code: github.com/openai/whisper/blob/main/whisper/decoding.py#L97
It says "the initial timestamp cannot be later than this"
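For context, that option belongs to the lower-level decode API rather than transcribe(). A sketch of where it plugs in, adapted from the README (filename hypothetical):

```python
import whisper

model = whisper.load_model("base")

# decode() works on a single 30-second window, so trim/pad first
audio = whisper.pad_or_trim(whisper.load_audio("audio.mp3"))
mel = whisper.log_mel_spectrogram(audio).to(model.device)

# max_initial_timestamp caps how late (in seconds) the FIRST timestamp
# token may appear -- it keeps the decoder from skipping leading audio
options = whisper.DecodingOptions(max_initial_timestamp=1.0, fp16=False)
result = whisper.decode(model, mel, options)
print(result.text)
```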
Hello! What's the best way to bulk upload mp3 files and convert them to SRT files? I'm assuming whisper does not do SRT and does VTT instead.
I recently used a YouTube whisper subtitle maker on a live stream. You can watch it on my channel. It did VTT format, but I think it also had an option for other formats.
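If you'd rather script the bulk conversion, here's a rough sketch that loops over a folder of mp3s and writes an .srt next to each one, built from the segment timestamps (the folder name is a placeholder, and the SRT writer is hand-rolled for illustration rather than anything built into whisper):

```python
from pathlib import Path
import whisper

model = whisper.load_model("base")

def fmt(t: float) -> str:
    """Seconds -> SRT timestamp, e.g. 00:01:02,345"""
    ms = int(t * 1000)
    h, ms = divmod(ms, 3_600_000)
    m, ms = divmod(ms, 60_000)
    s, ms = divmod(ms, 1_000)
    return f"{h:02}:{m:02}:{s:02},{ms:03}"

for mp3 in Path("audio").glob("*.mp3"):
    result = model.transcribe(str(mp3))
    blocks = []
    for i, seg in enumerate(result["segments"], start=1):
        blocks.append(f"{i}\n{fmt(seg['start'])} --> {fmt(seg['end'])}\n"
                      f"{seg['text'].strip()}\n")
    # Write the .srt next to the source audio file
    mp3.with_suffix(".srt").write_text("\n".join(blocks), encoding="utf-8")
```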
Great video, thanks for sharing!
Thanks for watching!
Hi Medallion! Thank you very much. I've been expecting this since your last video on audio data processing in Python. Is there a possibility to add a new language? I am currently working on a large audio dataset in my mother tongue, Fon, from West Africa, and would like some guidance. Best!
These models are trained on extremely large datasets for each language, so if you are looking to have something for a language that isn't in the existing list, it would be really hard to train that yourself. Maybe reach out to OpenAI and request that the language be added in future versions?
Hi Rob, thanks for sharing this video.
I am looking for a library/API that can convert speech to text from a YouTube video, and then I would combine the video with a translation of the text into another language.
Do you have any idea how I can do it?
Is Whisper a good library for that?
PS: the video may last more than an hour. Thanks in advance for your help🙏🏼
I am using a MacBook M1 and Visual Studio, and I keep getting "no module named torch". I switched to Jupyter, but then I get "FP16 is not supported on CPU; using FP32 instead".
So you got it working?
So, is Whisper used only for speech-to-text, and only in Python? Any JS support?
How is it with respect to data privacy? Does it store our data?
How big is that model? It has to be huge, right?
It comes in various sizes, from tiny (39M parameters) to large (about 1.5B parameters). You can find them listed in the repo here: github.com/openai/whisper#available-models-and-languages
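The size is just the string you pass to load_model(), so trading accuracy for speed is a one-line change:

```python
import whisper

# Smaller = faster and lighter on memory; larger = slower but more accurate.
# "tiny" is fine for quick drafts, "large" for the best quality.
model = whisper.load_model("tiny")  # or "base", "small", "medium", "large"

print(model.is_multilingual)  # English-only variants like "tiny.en" also exist
```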
@@robmulla thanks man that helps a lot
Can it be run on a Raspberry Pi 5?
Hello sir, when I try to transcribe large audio files it takes a long time. I want to transcribe in the minimum time; is this possible, sir?
Does it work with long files, like 2 hours?
Yes. When it predicts, it splits the long audio into smaller chunks, so it can run on long audio files.
Yo, what's up! Can you then translate it to another language, like print(result) but from English to German or another language?
I don't think whisper can do that type of translation out of the box. Almost everything I've seen is translation into English.
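For what it's worth, the X-to-English direction is built in via the task option (filename hypothetical):

```python
import whisper

model = whisper.load_model("base")

# task="translate" makes whisper output ENGLISH text for non-English audio;
# the reverse direction (e.g. English -> German) is not supported by the model
result = model.transcribe("german_audio.mp3", task="translate")
print(result["text"])  # English translation of the German speech
```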
@@robmulla ok thank you sir
Is this also realizable in real time?
I believe so. Check this out: huggingface.co/spaces/Amrrs/openai-whisper-live-transcribe
Thanks for making this video. I was wondering if you could explain the steps to follow to execute these 4 lines of code on a GPU. I installed the CUDA toolkit and Numba (I have a good graphics card, a GTX 3050) and followed some examples online, but I failed. Ty and have a great day!
Hey Hamza. It depends on the operating system you are using. Installing CUDA correctly and having it linked in your global path is usually the hardest part. For me, I followed the instructions on the NVIDIA website. Then I just pip installed the requirements from the whisper repo. Good luck!
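Once CUDA is working, whisper picks up the GPU automatically, but being explicit makes a broken setup obvious instead of silently falling back to CPU. A minimal sketch:

```python
import torch
import whisper

# load_model() defaults to CUDA when available; passing device explicitly
# makes it clear which hardware actually runs the model
device = "cuda" if torch.cuda.is_available() else "cpu"
model = whisper.load_model("base", device=device)

# fp16 only makes sense on the GPU; on CPU it just triggers the FP32 warning
result = model.transcribe("audio.mp3", fp16=(device == "cuda"))
print(result["text"])
```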
Github repo?
Whisper is available here github.com/openai/whisper/
Hi buddy, can you help with a detailed video on speech-to-text conversion using Python?
Can it be real-time??
best teacher ever!
Thanks for saying so Anirban!
Can this be made into .srt files?
Great question! I haven't done it myself but it looks like others have. Checkout this github discussion someone put together code that might be what you are looking for: github.com/openai/whisper/discussions/98
@@robmulla Cheers brother, I'll check that out very soon
Thanks!
thanks man
first video seen and subscribed
Love it! Thanks for subscribing.
Traceback (most recent call last):
File "/data/user/0/ru.iiec.pydroid3/files/arm-linux-androideabi/lib/python3.9/site-packages/pyttsx3/__init__.py", line 20, in init
eng = _activeEngines[driverName]
File "/data/user/0/ru.iiec.pydroid3/files/arm-linux-androideabi/lib/python3.9/weakref.py", line 137, in __getitem__
o = self.data[key]()
KeyError: 'sapi5'
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/data/user/0/ru.iiec.pydroid3/files/accomp_files/iiec_run/iiec_run.py", line 31, in
start(fakepyfile,mainpyfile)
File "/data/user/0/ru.iiec.pydroid3/files/accomp_files/iiec_run/iiec_run.py", line 30, in start
exec(open(mainpyfile).read(), __main__.__dict__)
File "", line 2, in
File "/data/user/0/ru.iiec.pydroid3/files/arm-linux-androideabi/lib/python3.9/site-packages/pyttsx3/__init__.py", line 22, in init
eng = Engine(driverName, debug)
File "/data/user/0/ru.iiec.pydroid3/files/arm-linux-androideabi/lib/python3.9/site-packages/pyttsx3/engine.py", line 30, in __init__
self.proxy = driver.DriverProxy(weakref.proxy(self), driverName, debug)
File "/data/user/0/ru.iiec.pydroid3/files/arm-linux-androideabi/lib/python3.9/site-packages/pyttsx3/driver.py", line 50, in __init__
self._module = importlib.import_module(name)
File "/data/user/0/ru.iiec.pydroid3/files/arm-linux-androideabi/lib/python3.9/importlib/__init__.py", line 127, in import_module
return _bootstrap._gcd_import(name[level:], package, level)
File "", line 1030, in _gcd_import
File "", line 1007, in _find_and_load
File "", line 986, in _find_and_load_unlocked
File "", line 680, in _load_unlocked
File "", line 850, in exec_module
File "", line 228, in _call_with_frames_removed
File "/data/user/0/ru.iiec.pydroid3/files/arm-linux-androideabi/lib/python3.9/site-packages/pyttsx3/drivers/sapi5.py", line 1, in
import comtypes.client # Importing comtypes.client will make the gen subpackage
ModuleNotFoundError: No module named 'comtypes'
[Program finished]
Plz help me with this error
Oh no. Did you figure it out? Might need to pip install that package.
@@robmulla I can't figure out the problem, plz help.
(I know that the program I've written is absolutely fine, but I can't understand what the problem is.)
Sir, I have a question. I want to make a program in Python such that it first recognizes the text from 20 images one by one and stores the last word from each image's text. At the same time, it should also recognize the audio from a file (which is playing at its normal pace) through speech recognition, and if it finds the last word from the image text in the audio at 36 seconds from the start, it should press a specific key on the keyboard. This continues until the audio finishes.
Is this possible using whisper?