this is a really helpful video for someone who just starts trying to do signal processing and classifying, thank you for your effort and it really helps me understand better on spectrogram and signal processing!
Thank you very much for the clear and visual explanation!
Thank you for the video. Very excited for this video series. One of your videos, the YOLO to COCO conversion, was very helpful.
I am glad you find the content helpful. Thanks for the feedback!
If you want to increase the resolution on the x-axis you can increase the sample rate. But how do you increase the frequency resolution on the y-axis?
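(For reference: in an STFT the frequency bins are spaced sr / n_fft apart, so the y-axis resolution improves with a larger FFT size, at the cost of time resolution. A minimal numpy sketch with illustrative parameters, not tied to the video's code:)

```python
import numpy as np

sr = 22050                                  # sample rate in Hz (illustrative)
t = np.arange(sr) / sr                      # one second of audio
signal = np.sin(2 * np.pi * 440.0 * t)      # 440 Hz test tone

for n_fft in (512, 4096):
    # Frequency bins are sr / n_fft Hz apart: a bigger FFT gives finer bins
    spectrum = np.abs(np.fft.rfft(signal[:n_fft]))
    freqs = np.fft.rfftfreq(n_fft, d=1.0 / sr)
    peak_hz = freqs[np.argmax(spectrum)]
    print(f"n_fft={n_fft}: bin spacing {sr / n_fft:.1f} Hz, peak at {peak_hz:.1f} Hz")
```

With n_fft=512 the bins are over 40 Hz apart, so the detected peak can sit tens of Hz away from the true 440 Hz; with n_fft=4096 it lands within a few Hz.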
I want to analyze a frequency signal with a fairly large bandwidth. Will this method suit my task?
this was really helpful! thank you very very much!!
Hello, we are working on an assignment related to gender recognition from voice. However, we want to extract values such as "mean frequency, standard deviation, spectral flatness" from a person's voice using the data you use. How can we achieve this?
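(For reference, these three statistics can be computed from a magnitude spectrum. A minimal numpy sketch, assuming the usual definitions: mean frequency as the amplitude-weighted centroid, standard deviation as the spectral spread around it, and flatness as the geometric-to-arithmetic mean ratio of the power spectrum:)

```python
import numpy as np

def spectral_stats(signal, sr):
    """Return (mean_freq, std_freq, flatness) of a 1-D audio signal."""
    spectrum = np.abs(np.fft.rfft(signal))
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / sr)
    power = spectrum ** 2

    # Mean frequency: amplitude-weighted average of the frequency axis
    weights = spectrum / spectrum.sum()
    mean_freq = np.sum(freqs * weights)

    # Standard deviation (spectral spread) around the mean frequency
    std_freq = np.sqrt(np.sum(weights * (freqs - mean_freq) ** 2))

    # Spectral flatness: geometric mean / arithmetic mean of the power
    # spectrum (near 1 for noise, near 0 for a pure tone)
    eps = 1e-12
    flatness = np.exp(np.mean(np.log(power + eps))) / (np.mean(power) + eps)
    return mean_freq, std_freq, flatness

# Example: a pure 200 Hz tone should give a mean frequency near 200 Hz
sr = 16000
t = np.arange(sr) / sr
tone = np.sin(2 * np.pi * 200.0 * t)
print(spectral_stats(tone, sr))
```

In practice you would compute these per short frame (or per recording) and feed them to a classifier.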
Hi, can I have the slide presentation? Really nice presentation!
At 5:47 you mention spectral leakage. What is that exactly?
I think that's the energy of a frequency component "leaking" into neighboring FFT bins when the analysis window doesn't contain a whole number of periods of the signal.
Hello, it's really helpful, but can you please tell me how and where I should run the code? (I know it's a silly question, but I'm new to this.)
Hi - The code can be run either in a Jupyter notebook or Google Colab. Colab might be easier if you don't already have Python/Jupyter notebook installed. To run the code in Colab, just download the code file from my GitHub and drop it in your Google Drive. Then adjust the file paths, etc. I hope this helps!
Thank you for sharing your effort.
Hello Prabhjot, this is indeed amazing work. Thank you for taking your time to share knowledge with the world. Could you please guide me on how to save batches of spectrograms? I have created a TensorFlow dataset of audio files and pass them through a data pipeline with decoding appropriate to my work. I want to plot and save each spectrogram from the generated dataset. Thank you in anticipation of your kind response. Cheers!
Is it possible to share a link to the 'h_1.wav' file used in your YouTube demo, please? 🙂
I have uploaded the sample file on my github page: github.com/PrabhjotKaurGosal/Helpful-scripts-for-MachineLearning
Hi Prabhjot Gosal, thank you for your video, which turned out to be very interesting!
A practical case: if I have to change the BPM of a song to make it constant for its entire duration (avoid drifting tempo), which library would suit me?
Hi,
Thanks for the comment! I am not well versed in processing music audio and unfortunately cannot help at this time.
Just time-stretch the song. Slowing a 100 bpm track down by 25% results in 75 bpm.
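(A note on implementation: pitch-preserving tempo change needs a proper time-stretching algorithm, e.g. librosa's `librosa.effects.time_stretch`. The naive alternative below just resamples the waveform, which changes tempo and pitch together; a minimal numpy sketch with illustrative parameters:)

```python
import numpy as np

def change_speed(signal, rate):
    """Resample a 1-D signal by linear interpolation.

    rate > 1 speeds the audio up (tempo multiplied by rate, shorter output);
    rate < 1 slows it down. Pitch shifts along with the tempo.
    """
    n_out = int(round(len(signal) / rate))
    # Positions in the input to sample at, spaced `rate` samples apart
    positions = np.arange(n_out) * rate
    return np.interp(positions, np.arange(len(signal)), signal)

# Turn a (conceptual) 100 bpm clip into 75 bpm: rate = 75 / 100
sr = 8000
t = np.arange(sr) / sr
audio = np.sin(2 * np.pi * 220.0 * t)
slowed = change_speed(audio, rate=0.75)
print(len(audio), len(slowed))   # the slowed version is ~4/3 as long
```

For a drifting tempo, you would estimate a beat grid first (e.g. with a beat tracker) and apply a time-varying rate rather than a single constant one.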
Hi Prabhjot , can you make a video about LPC algorithm in Feature Extraction please?
Hello, I sure will. It might take some time, though, as I wrap up the direct speech-to-speech translation project.
Hi,
Good day! Can we use this audio feature extraction to compare two voices of the same speaker for authentication? Could we save the log mel output as logMel.out, save the same speaker's voice recorded at a different time as logMel2.out, and then compare the two outputs to authenticate? Is that possible, and would it work well for this use case?
Regards,
Simhan
Hi,
It is certainly possible to compare the two log mels, and they may yield what you are looking for. However, MFCC features are generally better features for speech than log mel spectrograms. Thanks!
@@prabhjotgosal2489 Thanks for your response. Do you have any references for MFCC feature extraction ? It would be of great help for my research.
@@simhan2895, this video may be helpful: ua-cam.com/video/WJI-17MNpdE/v-deo.html
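(For reference, the standard MFCC pipeline is: power spectrum → mel filterbank → log → DCT. In practice `librosa.feature.mfcc` does all of this in one call; the single-frame numpy/scipy sketch below is only a simplified illustration with illustrative parameters, not the implementation from the videos:)

```python
import numpy as np
from scipy.fft import dct

def hz_to_mel(f):
    return 2595.0 * np.log10(1.0 + f / 700.0)

def mel_to_hz(m):
    return 700.0 * (10.0 ** (m / 2595.0) - 1.0)

def mel_filterbank(n_filters, n_fft, sr):
    """Triangular mel filters mapping an rfft power spectrum to mel bands."""
    mel_points = np.linspace(hz_to_mel(0.0), hz_to_mel(sr / 2.0), n_filters + 2)
    bins = np.floor((n_fft + 1) * mel_to_hz(mel_points) / sr).astype(int)
    fbank = np.zeros((n_filters, n_fft // 2 + 1))
    for i in range(1, n_filters + 1):
        left, center, right = bins[i - 1], bins[i], bins[i + 1]
        for b in range(left, center):          # rising edge of the triangle
            fbank[i - 1, b] = (b - left) / max(center - left, 1)
        for b in range(center, right):         # falling edge
            fbank[i - 1, b] = (right - b) / max(right - center, 1)
    return fbank

def mfcc_frame(frame, sr, n_filters=26, n_coeffs=13):
    """MFCCs of one windowed frame: power spectrum -> mel -> log -> DCT."""
    frame = frame * np.hamming(len(frame))
    power = np.abs(np.fft.rfft(frame)) ** 2
    mel_energies = mel_filterbank(n_filters, len(frame), sr) @ power
    log_mel = np.log(mel_energies + 1e-10)
    # The DCT decorrelates the log-mel energies; keep the first coefficients
    return dct(log_mel, type=2, norm='ortho')[:n_coeffs]

sr = 16000
t = np.arange(512) / sr
frame = np.sin(2 * np.pi * 300.0 * t)
print(mfcc_frame(frame, sr).shape)   # (13,)
```

A full extractor would slide this over overlapping frames of the recording, yielding a (n_frames, 13) feature matrix.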
Hi, very interesting! I have a query: how can we find audio abnormalities like missing samples of a specific duration, and glitches in the middle of an audio file? Could you help me? Thank you in advance.
Hi, that is an interesting question... If by missing samples you mean the audio is silent for a specific duration, then that is easy to detect: we can simply check for where the amplitude is 0. The glitches will be a little trickier in my opinion, as we would need some way of telling whether a spike/abnormality in the audio is part of the audio or truly a glitch. If you have a footprint of what a glitch looks like, then you could check whether your audio has a similar footprint.
@@prabhjotgosal2489 Hi, thanks for the reply. Glitches like spikes in between, or a sudden reduction in the time interval.
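(To make the silence-check idea from the reply above concrete, a minimal numpy sketch; the amplitude threshold and minimum duration are illustrative knobs, not universal values:)

```python
import numpy as np

def find_silent_runs(signal, sr, threshold=1e-4, min_duration=0.1):
    """Return (start_sec, end_sec) pairs where |signal| stays below threshold."""
    silent = np.abs(signal) < threshold
    # Diff a padded copy of the mask so runs touching either end are caught:
    # +1 marks a run start, -1 marks the sample just past a run end
    edges = np.diff(np.concatenate(([False], silent, [False])).astype(int))
    starts = np.flatnonzero(edges == 1)
    ends = np.flatnonzero(edges == -1)
    min_samples = int(min_duration * sr)
    return [(s / sr, e / sr) for s, e in zip(starts, ends) if e - s >= min_samples]

# Example: a tone with a quarter-second dropout in the middle
sr = 8000
t = np.arange(sr) / sr
audio = np.sin(2 * np.pi * 440.0 * t)
audio[2000:4000] = 0.0            # simulated missing samples
print(find_silent_runs(audio, sr))
```

Spike-type glitches could be flagged similarly by thresholding the sample-to-sample difference `np.abs(np.diff(signal))` against what is plausible for the recording, though as noted above that risks false positives on legitimate transients.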
Thanks for the video!
Hi Prabhjot, I want to compare audio of person A with audio of person B and get a match percentage. Can you guide me on how to achieve this?
Just pointing me in the right direction would be a great help.
Hi, you may want to check out pypi.org/project/speaker-verification/
I have not tested it myself but this could be a good start.
@@prabhjotgosal2489 I went through this; sorry, my question was wrong.
I don't want to verify the speaker; I want to keep person A as a reference and then match the pronunciation of person B with A.
Then get a percentage match score.
@@juicetin942 Very interesting problem! I can think of a few things which may or may not work depending on the dataset. I would start with the basics:
1. Use cross-correlation to compare the two audios (en.wikipedia.org/wiki/Cross-correlation).
2. Compare the spectrograms of the two audios (first convert each audio waveform to its spectrogram, then find the difference between the two spectrograms). The difference spectrogram will indicate how different the two audios are.
3. Formulate this as a template matching problem. The audio (waveform or its spectrogram) of person A can be considered a template. Then the goal is to find how well the audio of person B matches this template. Look for research papers in the CV space on template matching.
Please note this problem becomes harder if the two audios are drastically different, because then we need to deal with many more variables besides the pronunciation differences: different recording environments (which means dealing with different background noise), different audio lengths (if the content of the audios is the same but said at different timestamps), different speaker voices, etc.
@@prabhjotgosal2489 Thank you
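(To make option 1, cross-correlation, concrete: a minimal numpy sketch that scores two clips by the peak of their normalized cross-correlation. This is a rough overall similarity measure under strong assumptions, equal-length and roughly comparable recordings, not a pronunciation score:)

```python
import numpy as np

def similarity_score(a, b):
    """Peak normalized cross-correlation between two 1-D signals, in [-1, 1]."""
    # Zero-mean, unit-variance normalization so amplitude differences cancel
    a = (a - a.mean()) / (a.std() + 1e-12)
    b = (b - b.mean()) / (b.std() + 1e-12)
    # mode='full' tries every lag, so a small time offset is tolerated
    corr = np.correlate(a, b, mode='full') / min(len(a), len(b))
    return corr.max()

sr = 8000
t = np.arange(sr) / sr
ref = np.sin(2 * np.pi * 330.0 * t)
same = np.sin(2 * np.pi * 330.0 * t + 0.5)      # same tone, shifted phase
noise = np.random.default_rng(0).normal(size=sr)
print(similarity_score(ref, same), similarity_score(ref, noise))
```

The matching clip scores near 1 and the noise scores near 0; mapping the score to a "percentage match" is then just scaling, though for real speech the spectrogram-difference or template-matching options above are likely more robust.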
Excellent! Keep it up, beta.
Thanks a lot
What a waste! You showed literally nothing! Well, you showed you have no clue what you’re doing. Ugh.
Thanks a lot for this!