Types of Audio Features for Machine Learning

  • Published 4 Jul 2024
  • Learn how to distinguish among different types of audio features, which are instrumental in building intelligent audio applications. I introduce time-domain, frequency-domain, and time-frequency-domain features. I explain how we can categorise audio features based on their level of abstraction, the ML approach adopted, and their temporal scope.
    This video is part of the Audio Processing for Machine Learning series. This course aims to teach you how to process audio data 🎧 and extract relevant audio features for your machine learning applications 🤖🤖.
    Slides:
    github.com/musikalkemist/Audi...
    Join The Sound Of AI Slack community:
    valeriovelardo.com/the-sound-...
    Interested in hiring me as a consultant/freelancer?
    valeriovelardo.com/
    Follow Valerio on Facebook:
    / thesoundofai
    Connect with Valerio on Linkedin:
    / valeriovelardo
    Follow Valerio on Twitter:
    / musikalkemist
  • Science & Technology
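
As a quick illustration of the three domains mentioned in the description, here is a minimal sketch (assuming librosa is installed; the file name is a placeholder) computing one time-domain feature, one frequency-domain feature, and one time-frequency representation:

    import numpy as np
    import librosa

    y, sr = librosa.load("audio.wav")  # placeholder file name

    # Time-domain feature: zero-crossing rate per frame
    zcr = librosa.feature.zero_crossing_rate(y)

    # Frequency-domain feature: spectral centroid per frame
    centroid = librosa.feature.spectral_centroid(y=y, sr=sr)

    # Time-frequency representation: magnitude spectrogram from the STFT
    spectrogram = np.abs(librosa.stft(y, n_fft=2048, hop_length=512))

    print(zcr.shape, centroid.shape, spectrogram.shape)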

COMMENTS • 94

  • @MilanaShkhanukova
    @MilanaShkhanukova 2 years ago +6

    I've completed a university course covering nearly the same material, but the structure and logic of your videos give me a complete understanding, and I don't feel scared to answer questions.
    Thanks!

  • @cs306labevaluation3
    @cs306labevaluation3 3 years ago +24

    This series is such a gem! Very well articulated, and it's really helping me with my literature review. Thanks so much! Keep posting.

  • @drewpriebe5372
    @drewpriebe5372 2 years ago +2

    Amazing content, man! Much appreciated! You explain concepts in a logical fashion that makes them easy to learn and understand. Subscribed and looking forward to any future videos!

  • @Drew_7
    @Drew_7 1 year ago

    19:42, this is by far the BEST and easiest rundown I've heard on different types of ML. Without this, I was beginning to lose faith in my ability to figure out how to summarize it all. Ty my friend! TY TY TY!

  • @thatchessguy7072
    @thatchessguy7072 2 years ago

    This is a godsend, I’m using this to learn background for a senior project.

  • @xyc6090
    @xyc6090 2 years ago +1

    Deep thanks! Better than any of the other courses offered on Coursera.

  • @zamundacrypto
    @zamundacrypto 5 months ago

    This series is FIRE 🔥 🔥 I have been looking for a solid course that connects the dots between Audio Signal Processing & Machine Learning/AI. This is it! THANKS 💪🙏🙏

  • @letsplaionline
    @letsplaionline 3 years ago +4

    Thank you very much for this amazing series. I'm going to check out all the content on the channel. ✌

  • @tentyluaysari3393
    @tentyluaysari3393 2 years ago

    This series explains the types of audio features really well and even gives references for them. I hope you will start another exciting series soon! :)

  • @suyashramteke3588
    @suyashramteke3588 4 years ago +13

    I am able to understand the principles in a way I never have before. Thank you!!

    • @ValerioVelardoTheSoundofAI
      @ValerioVelardoTheSoundofAI  4 years ago +2

      Glad I could help!

    • @pk-bb8cq
      @pk-bb8cq 3 years ago

      @@ValerioVelardoTheSoundofAI Your level of explanation and your content are absolutely amazing... I'm working on a music genre classification project and you are my savior

    • @ValerioVelardoTheSoundofAI
      @ValerioVelardoTheSoundofAI  3 years ago +1

      @@pk-bb8cq thank you :)

  • @ridamsrivastava9502
    @ridamsrivastava9502 3 years ago +4

    Highly comprehensive and informative series!
    I had a question: how can we directly input low-level descriptors into Keras models?
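
A minimal sketch of one common way to do what the question above asks, assuming the low-level descriptors have already been aggregated into one fixed-length vector per clip (the shapes, class count, and random data here are illustrative placeholders, not from the video):

    import numpy as np
    import tensorflow as tf

    # Hypothetical dataset: 1000 clips, each summarised by 26 low-level descriptors
    # (e.g. means and variances of ZCR, RMS, spectral centroid, ...), 5 target classes
    X = np.random.rand(1000, 26).astype("float32")
    y = np.random.randint(0, 5, size=1000)

    # A small dense network that takes the descriptor vector directly as input
    model = tf.keras.Sequential([
        tf.keras.layers.Input(shape=(26,)),
        tf.keras.layers.Dense(64, activation="relu"),
        tf.keras.layers.Dense(5, activation="softmax"),
    ])
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    model.fit(X, y, epochs=10, batch_size=32, validation_split=0.2)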

  • @anshkadiyan5170
    @anshkadiyan5170 3 years ago

    Sooo happy to find this series

  • @abhishek-shrm
    @abhishek-shrm 3 years ago +3

    The only such resource on YouTube. Great work! Keep making videos.

  • @tkorting
    @tkorting 3 years ago +1

    Dear Valerio, thanks for the good material presented on your channel.
    I would like to find occurrences of a pattern (0.5 seconds) in a long audio file (e.g. 15 minutes).
    On your channel you provide source code to extract audio features, using for example Mel-frequency cepstral coefficients (MFCC).
    So, I computed MFCCs from my pattern and from small parts of my long audio, comparing the two using DTW.
    Do you think this is a good method to find the pattern occurrences?
    Thanks in advance.
    Regards
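
A minimal sketch of the matching approach described in the comment above, assuming librosa is installed and using placeholder file names (pattern.wav, long.wav) and window settings chosen purely for illustration:

    import numpy as np
    import librosa

    # Load the short pattern (~0.5 s) and the long recording at the same sample rate
    pattern, sr = librosa.load("pattern.wav", sr=22050)
    long_audio, _ = librosa.load("long.wav", sr=sr)

    # MFCCs of the pattern
    mfcc_pattern = librosa.feature.mfcc(y=pattern, sr=sr, n_mfcc=13)

    # Slide a pattern-length window over the long audio and score each position with DTW
    hop = len(pattern) // 2
    costs = []
    for start in range(0, len(long_audio) - len(pattern), hop):
        window = long_audio[start:start + len(pattern)]
        mfcc_window = librosa.feature.mfcc(y=window, sr=sr, n_mfcc=13)
        D, _ = librosa.sequence.dtw(X=mfcc_pattern, Y=mfcc_window, metric="cosine")
        costs.append((start / sr, D[-1, -1]))  # (position in seconds, alignment cost)

    # The lowest-cost windows are candidate occurrences of the pattern
    costs.sort(key=lambda c: c[1])
    print(costs[:5])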

  • @rxz8862
    @rxz8862 3 years ago +1

    Hey brother, your videos are so amazing, thank you a lot🙌🙌

  • @vivekmankar9643
    @vivekmankar9643 3 years ago

    Your way of explaining is amazing!!

  • @rrrjo4137
    @rrrjo4137 2 years ago

    Thank you for the super kind lecture!

  • @albertgabrielmatei138
    @albertgabrielmatei138 4 months ago

    So interesting! I'm doing a Robotic Intelligence degree and I have never programmed audio, but you explain it so well that you make me want to try it.

  • @zeldisuryady1541
    @zeldisuryady1541 4 years ago +3

    Informative and impressive video, thanks a lot Valerio

  • @raffaelrameh14
    @raffaelrameh14 9 months ago

    Appreciate this content very much! Thanks!

  • @santhosh20071993
    @santhosh20071993 2 years ago

    Excellent video. Liked all your channel's videos.

  • @shafagh_projects
    @shafagh_projects 3 years ago

    You are amazing. The best tutorial ever made. Thanks a lot.

  • @rahuldeora5815
    @rahuldeora5815 3 years ago +1

    At 14:11: the Fourier transform shows an amplitude peak at around 128 Hz, but it does not show up in the spectrogram. Why is this, when the others are visible? The highest amplitude is said to be at 256 Hz in the video, and that one does appear.

  • @sophalchan775
    @sophalchan775 2 years ago

    Great video!
    Thanks so much for sharing.
    I wonder why some WAV files have a blue background and some have black when I transform them to spectrograms?

  • @dudusash
    @dudusash 4 years ago +3

    Great video as usual. Will you be covering compression techniques before feeding data to dense NNs? Any good books I can refer to along with the video?

    • @ValerioVelardoTheSoundofAI
      @ValerioVelardoTheSoundofAI  4 years ago

      I still haven't decided. As for a reference book, I suggest you check out Fundamentals of Music Processing.

  • @subramanyabhattm4626
    @subramanyabhattm4626 3 years ago

    If we are using traditional ML to solve a problem of classifying emotions based on audio, which three features should be considered?

  • @markusbuchholz3518
    @markusbuchholz3518 4 years ago

    Hello Valerio, as always, impressive work and effort! Not sure about your schedule, but it would be very interesting (maybe in the future) to see your approach to removing noise from a signal (using deep learning). Normally such an approach (removing noise) should work in real time, however I am not convinced it is feasible to run such an application in RT. Thanks and have a good day.

    • @ValerioVelardoTheSoundofAI
      @ValerioVelardoTheSoundofAI  4 years ago +1

      Thank you! Denoising is definitely an application that I'd like to cover in the future. Not sure about having it RT though...

  • @gabriellara7456
    @gabriellara7456 2 years ago

    Valerio, could you please provide bibliographic references for the taxonomy presented in this video?

  • @akshaykumar-yx9ic
    @akshaykumar-yx9ic 1 year ago

    Great video 🙏

  • @manojrana009
    @manojrana009 3 years ago

    A super thanks to you,🙏

  • @juniorsilva5713
    @juniorsilva5713 3 months ago

    Thanks a lot!!!

  • @dapr98
    @dapr98 7 months ago

    Thanks Valerio, this is brilliant. Could I actually create my own playlists and have a model identify a pattern in each playlist, as if it were classifying music by genre, but instead let's say it's classifying playlist1, playlist2, playlist3, etc.? It would then add songs automatically to each playlist... How could I approach this?

  • @6tyelement979
    @6tyelement979 3 years ago

    Hi man, thank you! You should do a course on audio classification, it would be awesome.

    • @ValerioVelardoTheSoundofAI
      @ValerioVelardoTheSoundofAI  3 years ago

      Thank you! I have a series called "DL for Audio with Python", where I tackle an audio/music classification problem.

  • @wesleymorris1573
    @wesleymorris1573 1 year ago

    "Nobody uses MFCCs for machine learning anymore"... Enter HuBERT!!!! In all seriousness, your videos are amazing and perfect, never stop!

  • @shubham6867
    @shubham6867 3 years ago

    Can we also detect locust sounds with a voice recognition model, and if yes, how?

  • @vidyagopal3431
    @vidyagopal3431 2 years ago

    well organized

  • @lakshaykhanna9811
    @lakshaykhanna9811 2 years ago

    @14:30 Shouldn't the y-axis be the time axis and the x-axis be frequency, with the brightness indicating the amplitude? Because if what you said is true, that would mean that if we move along the time axis, at one particular instant I would have multiple frequencies, which is certainly not possible. It would be great if you could clarify this doubt.

  • @wixor_69
    @wixor_69 3 years ago +3

    Hi, will you cover the wavelet transform or the Hilbert transform? Very good content, btw.

    • @ValerioVelardoTheSoundofAI
      @ValerioVelardoTheSoundofAI  3 years ago +1

      I haven't planned to cover wavelet transform soon. However, I'll cover it in the future. Stay tuned :)

  • @tyhuffman5447
    @tyhuffman5447 3 years ago

    Very good stuff, thanks for making this. Question: how practical would it be to use the entire list of amplitude envelope, root-mean-square, zero crossing, ... to initially train a smallish model and slowly reduce the list to the one that works best with the data we are looking at? Rather than attempting to guess our way through the data, since our instincts about sound are very different from an ML model's.

    • @ValerioVelardoTheSoundofAI
      @ValerioVelardoTheSoundofAI  3 years ago

      I would suggest doing a pre-processing analysis, investigating the correlation between the different audio features and data samples. This way you can get an at-a-glance idea of which features are the most promising.

    • @tyhuffman5447
      @tyhuffman5447 3 years ago

      Valerio Velardo - The Sound of AI thank you! Pre-processing analysis is in a lesson coming up. Good to know.
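
A minimal sketch of the kind of pre-processing analysis suggested in the reply above, assuming librosa and pandas are installed and using a hypothetical list of audio files:

    import numpy as np
    import pandas as pd
    import librosa

    # Hypothetical list of audio clips from the dataset
    files = ["clip_01.wav", "clip_02.wav", "clip_03.wav"]

    rows = []
    for path in files:
        y, sr = librosa.load(path)
        # Summarise each clip with the mean of a few frame-level features
        rows.append({
            "rms": float(np.mean(librosa.feature.rms(y=y))),
            "zcr": float(np.mean(librosa.feature.zero_crossing_rate(y))),
            "centroid": float(np.mean(librosa.feature.spectral_centroid(y=y, sr=sr))),
            "bandwidth": float(np.mean(librosa.feature.spectral_bandwidth(y=y, sr=sr))),
        })

    df = pd.DataFrame(rows)
    # Highly correlated features carry redundant information and are candidates for dropping
    print(df.corr())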

  • @parisaahmadzadeh6866
    @parisaahmadzadeh6866 6 months ago

    Hi, thanks a lot for your great video. Is MFCC in the same category as the spectrogram? I mean, is it a handcrafted feature?

    • @ValerioVelardoTheSoundofAI
      @ValerioVelardoTheSoundofAI  6 months ago

      I'd say so. However, MFCC is more "handcrafted" than a spectrogram, in that there are more manipulations of the original signal.
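
A minimal sketch of the extra manipulations mentioned in the reply above: starting from a (mel) spectrogram and deriving MFCCs from it with librosa (the file name is a placeholder):

    import librosa

    y, sr = librosa.load("clip.wav")  # placeholder file name

    # A mel spectrogram already involves design choices: frame size, hop length, mel filter bank
    mel_db = librosa.power_to_db(librosa.feature.melspectrogram(y=y, sr=sr, n_mels=40))

    # MFCCs apply further manipulations on top of it (log compression plus a discrete cosine
    # transform), which is why they sit further along the "handcrafted" end of the spectrum
    mfcc = librosa.feature.mfcc(S=mel_db, n_mfcc=13)

    print(mel_db.shape, mfcc.shape)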

  • @marioandresheviacavieres1923
    @marioandresheviacavieres1923 5 months ago

    Pure gold!

  • @venkatesanr9455
    @venkatesanr9455 4 years ago +1

    Hi Valerio, thanks for sharing your knowledge. Why is it the STFT (short-time Fourier transform)? Thanks

    • @ValerioVelardoTheSoundofAI
      @ValerioVelardoTheSoundofAI  4 years ago +1

      STFT is a particular transformation that enables us to calculate a spectrogram. You can think of it as a series of spectra calculated sequentially on a small subset of the audio signal. I'll cover this in detail moving forward!

    • @venkatesanr9455
      @venkatesanr9455 4 years ago +1

      @@ValerioVelardoTheSoundofAI Thanks for your response and the series
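
A minimal sketch of the idea described in the reply above (a spectrogram as a sequence of spectra computed on short, overlapping chunks of the signal), assuming librosa is installed and using a placeholder file name:

    import numpy as np
    import librosa

    y, sr = librosa.load("audio.wav")  # placeholder file name

    # Short-Time Fourier Transform: split the signal into short overlapping frames
    # and compute one spectrum (FFT) per frame
    stft = librosa.stft(y, n_fft=2048, hop_length=512)

    # Magnitude spectrogram in decibels: one column per time frame, one row per frequency bin
    spectrogram_db = librosa.amplitude_to_db(np.abs(stft), ref=np.max)
    print(spectrogram_db.shape)  # (1 + n_fft // 2, number_of_frames)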

  • @efeozkaya1372
    @efeozkaya1372 3 years ago

    Great content! Hope to collaborate with you at some point on a project.

  • @subhamkundu5043
    @subhamkundu5043 3 years ago

    Great video. Excellent explanation. Can you make a video about some projects we could build?

    • @ValerioVelardoTheSoundofAI
      @ValerioVelardoTheSoundofAI  3 years ago +1

      That's a nice idea! I'll add this topic to the backlog of my "Tips and Tricks" videos.

    • @subhamkundu5043
      @subhamkundu5043 3 years ago

      @@ValerioVelardoTheSoundofAI hope to see the video very soon.

  • @AI_Elbaz
    @AI_Elbaz 3 years ago

    Great video.

  • @laidbackmedia
    @laidbackmedia 8 months ago

    What is the source of the Deep Learning tuning reference?

  • @ragavans85
    @ragavans85 1 year ago

    Thanks for the extremely informative course. What would be the audio features that, when extracted, would help in comparing two recitations?
    This is my use case. In India there is a tradition of memorizing texts. A teacher recites them and students recite back 2-3 times. The teacher corrects the pronunciation if there are any mistakes, and the students memorize as the process gets repeated. I am involved in developing an app that could potentially replace the teacher. For that, the app would play a recording of the teacher's recital and wait for the learner to recite. The app would then compare the recital of the learner with the recorded recital. What would be the audio features that would help in identifying matches and mismatches?

  • @kaziasifahmed2443
    @kaziasifahmed2443 3 years ago

    Sir, which audio feature extraction processes are trending for feeding into an RNN model or a CNN-LSTM model?

    • @ValerioVelardoTheSoundofAI
      @ValerioVelardoTheSoundofAI  3 years ago +1

      In DNN, we generally use (Mel) Spectrograms. A lot more on this in coming videos!

    • @kaziasifahmed2443
      @kaziasifahmed2443 3 years ago

      Thanks for helping us by clarifying the concepts of sound processing
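
A minimal sketch of preparing the (mel) spectrogram input mentioned in the reply above for a convolutional network, assuming librosa is installed and using a placeholder file name and illustrative parameter values:

    import numpy as np
    import librosa

    y, sr = librosa.load("clip.wav", duration=3.0)  # placeholder file name

    # Mel spectrogram in decibels: a common input representation for CNN / CRNN audio models
    mel = librosa.feature.melspectrogram(y=y, sr=sr, n_fft=2048, hop_length=512, n_mels=128)
    mel_db = librosa.power_to_db(mel, ref=np.max)

    # Add batch and channel axes so the array can be fed to a 2D-convolutional network
    cnn_input = mel_db[np.newaxis, ..., np.newaxis]  # shape: (batch, n_mels, time, channels)
    print(cnn_input.shape)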

  • @priataoshru3900
    @priataoshru3900 3 years ago

    Hello sir, would it be possible for you to tell me what features are necessary for the voice recognition part? It would consist of classifying age, gender, and also individuals. I am doing my ML project on this and I am very, very confused. It would be a great help if you could tell me. Thank you.

    • @ValerioVelardoTheSoundofAI
      @ValerioVelardoTheSoundofAI  3 years ago

      You could use either MFCCs or (Mel) Spectrograms for those tasks.

    • @priataoshru3900
      @priataoshru3900 3 years ago

      @@ValerioVelardoTheSoundofAI thank you so much. really appreciate it.

    • @priataoshru3900
      @priataoshru3900 3 years ago

      @@ValerioVelardoTheSoundofAI Also, do you have any audio dataset cleaning video?

  • @ektabajaj1683
    @ektabajaj1683 3 years ago

    Hello sir. I am doing a research study using machine learning. Can you please clarify why traditional machine learning is still in use if, in deep learning, we don't need to define features, as you described in the video?

    • @ValerioVelardoTheSoundofAI
      @ValerioVelardoTheSoundofAI  3 years ago

      It depends on the use case. Sometimes DL applications are impractical. DL takes a lot of computational resources. Traditional ML techniques require fewer resources and less specialised talent. If you have a small dataset, again, it doesn't make much sense to go with a DL approach.

    • @ektabajaj1683
      @ektabajaj1683 3 years ago

      @@ValerioVelardoTheSoundofAI Okay, thanks a lot for your response and time. The series is really great and is helping me with my study. Much appreciated.

  • @shafagh_projects
    @shafagh_projects 3 years ago

    I have a question:
    how can we convert a time-dependent signal to an audio file in Python?

    • @ValerioVelardoTheSoundofAI
      @ValerioVelardoTheSoundofAI  3 years ago

      You can use librosa.

    • @shafagh_projects
      @shafagh_projects 3 years ago

      @@ValerioVelardoTheSoundofAI I found librosa.org/doc/0.7.1/generated/librosa.output.write_wav.html,
      but the librosa.output.write_wav function was removed in version 0.8.
      I have attached the file in Slack for your further consideration and would appreciate it if you could give me some hints.
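
Since librosa.output.write_wav is gone in recent librosa versions, here is a minimal sketch of writing a time-dependent signal to a WAV file with the soundfile package instead (the signal here is an illustrative 440 Hz sine):

    import numpy as np
    import soundfile as sf

    # Example time-dependent signal: 2 seconds of a 440 Hz sine wave
    sr = 22050
    t = np.arange(0, 2.0, 1.0 / sr)
    signal = 0.5 * np.sin(2 * np.pi * 440 * t)

    # Write the samples to disk; soundfile is the usual replacement for librosa.output.write_wav
    sf.write("output.wav", signal, sr)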

  • @ektabajaj1683
    @ektabajaj1683 3 years ago

    Sir, I am doing a project for my higher studies on Alzheimer's detection, so I need datasets of patients' speech. There are some organizations that provide datasets, like DementiaBank, but they require an authentication procedure. Can you please help me find datasets... even small ones?

    • @ValerioVelardoTheSoundofAI
      @ValerioVelardoTheSoundofAI  3 years ago

      I'm sorry Ekta, but I'm not familiar with these types of datasets. Have you tried Kaggle? Other option, you could ask this question in The Sound of AI Slack group. Somebody there may know about this...
      PS: Please call me Valerio ;)

    • @ektabajaj1683
      @ektabajaj1683 3 years ago +2

      @@ValerioVelardoTheSoundofAI Kaggle has many datasets easily available, but sadly it doesn't have a speech dataset for Alzheimer's. I will ask in the Slack group for sure. Thank you.

  • @niyanderniyago7577
    @niyanderniyago7577 3 years ago +1

    500th like❤

  • @sarabhian2270
    @sarabhian2270 2 years ago

    The channel's logo tells you that this guy is a huge fan of neural style transfer 😂🤣

  • @sushruthbhat5727
    @sushruthbhat5727 3 years ago

    Suppose I am given a task regarding voice identification. There is a database that contains audio files (voices) of all my customers. When a person calls my company for any reason, I must authenticate, based on the audio files (voices), that the caller is the same person. If anyone could direct me on how to solve this problem I'd really appreciate it.

  • @wudaqin4310
    @wudaqin4310 3 years ago +1

    Every time the speaker says "focus" I mishear it as the f-word...

  • @manojnoochila
    @manojnoochila 3 months ago

    Can you do a project on deepfake audio detection?