I've completed a course at uni covering nearly the same info, but the structure and logic of your videos gave me a complete understanding, and I don't feel scared to answer questions anymore.
Thanks!
19:42, this is by far the BEST and easiest rundown I've heard on the different types of ML. Without this, I was beginning to lose faith in my ability to figure out how to summarize it all. Thank you, my friend!
The series is a such a gem! Very well articulated and is really helping me with my literature review. Thanks so much! Keep posting.
I am able to understand the principles in a way I've never before. Thank you!!
Glad I could help!
@@ValerioVelardoTheSoundofAI Your level of explanation and your content are absolutely amazing... I'm working on a music genre classification project and you are my savior!
@@pk-bb8cq thank you :)
The only such resource on YouTube. Great work! Keep making videos.
Thank you!
This series is FIRE 🔥 🔥 I have been looking for a solid course that connects the dots between Audio Signal Processing & Machine Learning/AI. This is it! THANKS 💪🙏🙏
Thank you very much for this amazing series. I'm going to check out all the content on the channel. ✌
Thank you!
This is a godsend, I’m using this to learn background for a senior project.
Deep thanks! Better than any of the comparable courses offered on Coursera.
Amazing content man! Much appreciated! You explained concepts in a logical fashion that made it easy to learn and understand the concepts. Subscribed and looking forward to any future videos!
This series explains the types of audio features really well, and even gives references for them. I hope you'll start another exciting series soon! :)
I will! Stay tuned :)
So interesting! I'm doing a Robotic Intelligence degree and I've never programmed audio, but you explain it so well that you've made me want to try it.
Amazing!
Informative and impressive video, thanks a lot Valerio
Thanks!
Highly comprehensive and informative series!
I had a question: how can we directly input low-level descriptors into Keras models?
At 14:11: the Fourier transform shows an amplitude peak at around 128 Hz, but it does not show up in the spectrogram. Why is this, when others are visible? The highest-amplitude peak is said to be at 256 Hz in the video, and that one does appear.
Hey brother, your videos are so amazing, thank you a lot🙌🙌
"Nobody uses MFCCs for machine learning anymore"... Enter HuBERT! In all seriousness, your videos are amazing and perfect, never stop!
Thanks!
Excellent video. I liked all of your channel's videos.
Sooo happy to find this series
Your way of explaining is amazing !!
Thank you!
You are amazing. The best tutorial ever made. Thanks a lot!
Thank you!
Thank you for the super kind lecture!
@14:30 Shouldn't the y-axis be the time axis and the x-axis be frequency, with the brightness indicating the amplitude? Because if what you said is true, that would mean that moving along the time axis, at one particular instant I would have multiple frequencies, which is certainly not possible. It would be great if you could clarify this doubt.
Appreciate this content very much! Thanks!
Thanks Valerio, this is brilliant. Could I create my own playlists and have a model identify a pattern in each playlist, as if it were classifying music by genre, but instead classifying playlist1, playlist2, playlist3, etc.? Then it would add songs automatically to each playlist. How could I approach this?
Great video as usual. Will you be covering compression techniques before feeding data to dense NNs?
Any good books I can refer to along with the videos?
I still haven't decided. As for a reference book, I suggest you check out Fundamentals of Music Processing.
Hi will you cover the topic of wavelet transform or Hilbert transform? Very good content btw.
I haven't planned to cover wavelet transform soon. However, I'll cover it in the future. Stay tuned :)
Dear Valerio, thanks for the good material presented on your channel.
I would like to find occurrences of a short pattern (0.5 seconds) in a long audio file (e.g. 15 minutes).
On your channel you provide source code to extract audio features, using for example Mel-frequency cepstral coefficients (MFCCs).
So I computed MFCCs from my pattern and from small slices of the long audio, and compared them using DTW (dynamic time warping).
Do you think this is a good method for finding occurrences of the pattern?
Thanks in advance.
Regards
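[Editor's note] The sliding-window DTW search described in this comment can be sketched in a few lines of numpy. This is a minimal illustration with synthetic arrays standing in for real MFCC matrices; the function names, window hop, and toy dimensions are all illustrative, not a definitive implementation.

```python
import numpy as np

def dtw_cost(X, Y):
    """Dynamic time warping cost between two feature sequences,
    each shaped (n_frames, n_features), using Euclidean frame distance."""
    n, m = len(X), len(Y)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            dist = np.linalg.norm(X[i - 1] - Y[j - 1])
            D[i, j] = dist + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

def find_pattern(pattern_mfcc, long_mfcc, hop=5):
    """Score every window of the long recording against the pattern;
    a low DTW cost marks a likely occurrence."""
    w = len(pattern_mfcc)
    return [
        (start, dtw_cost(pattern_mfcc, long_mfcc[start:start + w]))
        for start in range(0, len(long_mfcc) - w + 1, hop)
    ]

# Toy demo: plant the pattern at frame 50 of a random feature sequence
rng = np.random.default_rng(0)
pattern = rng.normal(size=(20, 13))   # ~0.5 s worth of 13 MFCCs
long_seq = rng.normal(size=(200, 13))
long_seq[50:70] = pattern             # the occurrence we want to find
scores = find_pattern(pattern, long_seq)
best_start = min(scores, key=lambda s: s[1])[0]  # frame 50
```

In practice, librosa ships a ready-made `librosa.sequence.dtw` that returns the full accumulated cost matrix and is much faster than this pure-Python loop.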
Thanks for the extremely informative course. Which audio features, when extracted, would help in comparing two recitations?
This is my use case: in India there is a tradition of memorizing texts. A teacher recites them and students recite back 2-3 times. The teacher corrects the pronunciation if there are any mistakes, and the students memorize as the process is repeated. I am involved in developing an app that could potentially replace the teacher. The app would play a recording of the teacher's recital and wait for the learner to recite; it would then compare the learner's recital with the recorded one. Which audio features would help in identifying matches and mismatches?
Difficult to say without trying things out. But mel spectrograms often tend to work best ;)
Great video!
Thanks so much for sharing.
I wonder why some WAV files have a blue background and some a black one when I transform them into spectrograms?
I am learning this for our Speech Emotion Recognition project, so any help and guidance would be greatly appreciated.
Hi man, thank you! You should do a course on audio classification; it would be awesome.
Thank you! I have a series called "DL for Audio with Python", where I tackle an audio/music classification problem.
well organized
Great video 🙏
What is the source of the deep learning tuning reference?
Hello Valerio, as always, impressive work and effort! I'm not sure about your schedule, but it would be very interesting (maybe in the future) to see your approach to removing noise from a signal using deep learning. Normally such an approach (noise removal) should work in real time (RT); however, I am not convinced it is feasible to run such an application in RT. Thanks and have a good day.
Thank you! Denoising is definitely an application that I'd like to cover in the future. Not sure about running it in RT, though...
If we are using traditional ML to classify emotions based on audio, which three features should be considered?
Hi, thanks a lot for your great video. Is MFCC in the same category as a spectrogram? I mean, is it a handcrafted feature?
I'd say so. However, MFCC is more "handcrafted" than a spectrogram, in that there are more manipulations of the original signal.
Hi Valerio, thanks for sharing your knowledge. Why is it called the STFT (short-time Fourier transform)? Thanks!
STFT is a particular transformation that enables us to calculate a spectrogram. You can think of it as a series of spectra calculated sequentially on a small subset of the audio signal. I'll cover this in detail moving forward!
@@ValerioVelardoTheSoundofAI Thanks for your response and the series
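[Editor's note] The reply above ("a series of spectra calculated sequentially on a small subset of the audio signal") can be made concrete with a naive numpy sketch. This is an illustration of the idea, not the optimized implementation libraries like librosa use; the frame size, hop length, and test tone are arbitrary choices.

```python
import numpy as np

def stft(signal, frame_size=1024, hop_length=512):
    """Naive STFT: slide a Hann window over the signal and take an
    FFT of each short frame. The magnitude of the result is the
    (linear) spectrogram, shaped (frame_size // 2 + 1, n_frames)."""
    window = np.hanning(frame_size)
    n_frames = 1 + (len(signal) - frame_size) // hop_length
    frames = np.stack([
        signal[i * hop_length : i * hop_length + frame_size] * window
        for i in range(n_frames)
    ])
    # rfft keeps only the non-negative frequency bins
    return np.fft.rfft(frames, axis=1).T

# One second of a 440 Hz sine at a 22050 Hz sample rate:
# the energy should concentrate near bin 440 / (22050 / 1024) ≈ 20
sr = 22050
t = np.arange(sr) / sr
tone = np.sin(2 * np.pi * 440 * t)
spectrogram = np.abs(stft(tone))
peak_bin = spectrogram[:, 10].argmax()
```

Each column of `spectrogram` is one spectrum; stacking them side by side over time is exactly what a spectrogram plot shows.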
Sir, please make a series on Wi-Fi CSI (channel state information) data.
Valerio, could you please provide bibliographic references for the taxonomy presented in this video?
Thank you.
Can we also detect locust sounds with a voice recognition model, and if yes, how?
Thanks a lot!!!
Great video. Excellent explanation. Can you make a video about some projects we could build?
That's a nice idea! I'll add this topic to the backlog of my "Tips and Tricks" videos.
@@ValerioVelardoTheSoundofAI hope to see the video very soon.
Very good stuff, thanks for making this. Question: how practical would it be to use the entire list of features (amplitude envelope, root-mean-square energy, zero-crossing rate, ...) to initially train a smallish model, and then slowly trim the list down to the subset that works best with the data we're looking at? Rather than guessing our way through the data, since our instincts about sound are quite different from an ML model's.
I would suggest doing a pre-processing analysis, investigating the correlation between the different audio features and data samples. This way you can get, at a glance, an idea of which features are the most promising.
Valerio Velardo - The Sound of AI thank you! Pre-processing analysis is in a lesson coming up. Good to know.
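[Editor's note] A minimal sketch of the pre-processing analysis suggested above: build a clip-by-feature matrix, compute pairwise correlations, and flag near-redundant feature pairs. The feature values here are synthetic stand-ins for real extracted features, and the 0.95 threshold is an illustrative choice.

```python
import numpy as np

# Hypothetical feature matrix: rows = audio clips, columns = features
# (e.g. amplitude envelope mean, RMS energy, zero-crossing rate).
rng = np.random.default_rng(0)
rms = rng.random(100)
amp_env = rms * 2 + rng.normal(0, 0.01, 100)  # nearly redundant with RMS
zcr = rng.random(100)                          # independent feature
features = np.column_stack([amp_env, rms, zcr])

# Pairwise Pearson correlation between feature columns
corr = np.corrcoef(features, rowvar=False)

# Feature pairs whose |correlation| exceeds a threshold are candidates
# for dropping one member before training
redundant = [(i, j)
             for i in range(corr.shape[0])
             for j in range(i + 1, corr.shape[0])
             if abs(corr[i, j]) > 0.95]
```

Here `redundant` contains the (amplitude envelope, RMS) pair but not the zero-crossing-rate pairs, which is the "at a glance" signal the reply describes.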
perfect 👌 👌
A super thanks to you,🙏
Hello sir. I am doing a research study using machine learning. But can you please clarify why traditional machine learning is still in use if, in deep learning, we don't need to define features, as you described in the video?
It depends on the use case. Sometimes DL applications are impractical: DL takes a lot of computational resources, while traditional ML techniques require fewer resources and less specialised talent. If you have a small dataset, again, it doesn't make much sense to go with a DL approach.
@@ValerioVelardoTheSoundofAI Okay, thanks a lot for your response and time. The series is really great and very helpful in my studies. Much appreciated.
Sir, which audio feature extraction processes are trending for feeding into an RNN model or a CNN-LSTM model?
In DNNs, we generally use (mel) spectrograms. A lot more on this in coming videos!
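[Editor's note] To make the reply above concrete: a mel spectrogram is just a linear spectrogram projected through a bank of triangular filters spaced evenly on the mel scale. A minimal numpy sketch of that filterbank follows; the sample rate, FFT size, and band count are arbitrary, and in practice `librosa.filters.mel` does the same job with better normalization.

```python
import numpy as np

def hz_to_mel(f):
    return 2595.0 * np.log10(1.0 + f / 700.0)

def mel_to_hz(m):
    return 700.0 * (10.0 ** (m / 2595.0) - 1.0)

def mel_filterbank(sr=22050, n_fft=2048, n_mels=10):
    """Triangular filters with centers evenly spaced on the mel scale.
    Multiply the result by a power spectrogram (n_fft // 2 + 1 bins
    per frame) to obtain a mel spectrogram."""
    mels = np.linspace(0.0, hz_to_mel(sr / 2), n_mels + 2)
    bin_idx = np.floor((n_fft + 1) * mel_to_hz(mels) / sr).astype(int)
    fb = np.zeros((n_mels, n_fft // 2 + 1))
    for i in range(n_mels):
        left, center, right = bin_idx[i], bin_idx[i + 1], bin_idx[i + 2]
        for k in range(left, center):          # rising slope
            fb[i, k] = (k - left) / max(center - left, 1)
        for k in range(center, right):         # falling slope
            fb[i, k] = (right - k) / max(right - center, 1)
    return fb

fb = mel_filterbank()
```

The narrow filters at low frequencies and wide filters at high frequencies are what make the representation perceptually motivated.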
Thanks for helping us by clarifying the concepts of sound processing!
Pure gold!
Hello sir, would it be possible for you to tell me which features are necessary for a specific voice recognition task? It would consist of classifying age, gender, and also individuals. I am doing my ML project on this and I am very confused. It would be a great help if you could tell me. Thank you.
You could use either MFCCs or (mel) spectrograms for those tasks.
@@ValerioVelardoTheSoundofAI thank you so much. really appreciate it.
@@ValerioVelardoTheSoundofAI Also, do you have any video on cleaning audio datasets?
Great content! Hope to collaborate with you at some point on a project.
Sir, I am doing a higher-studies project on Alzheimer's detection, so I need datasets of patients' speech. There are some organizations which provide datasets, like DementiaBank, but they require an authentication procedure. Can you please help me find datasets, even small ones?
I'm sorry Ekta, but I'm not familiar with these types of datasets. Have you tried Kaggle? Another option: you could ask this question in The Sound of AI Slack group. Somebody there may know about this...
PS: Please call me Valerio ;)
@@ValerioVelardoTheSoundofAI Kaggle has many datasets easily available, but sadly it doesn't have a speech dataset for Alzheimer's. I will surely ask in the Slack group. Thank you.
I have a question:
How can we convert a time-dependent signal to an audio file in Python?
You can use librosa.
@@ValerioVelardoTheSoundofAI I found librosa.org/doc/0.7.1/generated/librosa.output.write_wav.html,
but the librosa.output.write_wav function was removed in version 0.8.
I have attached the file in Slack for your further consideration, and I would appreciate it if you could give me some hints.
Can you do a project regarding deepfake audio detection?
Suppose I am given a task regarding voice identification. There is a database that contains audio files (voices) of all my customers. When a person calls my company, for any reason, I must verify, based on the stored voice recordings, that the caller is the same person. If anyone could point me toward solving this problem, I'd really appreciate it.
Search for "speaker verification".
The channel logo tells me this guy is a huge fan of neural style transfer 😂🤣
Indeed I am!
Every time the speaker says "focus" I mishear it as the f-word...
500th like❤