COMMENTS

  • @itsrainingafterall · 1 year ago · +9

    thank u jeff u just saved me and my math grade 👍

  • @SS-rv6ke · 19 days ago

    Thanks, Jeff, I find your video really helpful as a hobbyist programmer and musician.
    Could you please do a video about automatic guitar transcription? For example: show the notes played on the guitar in real time on a virtual guitar fretboard, and maybe also save it as a video/GIF file for later viewing.
    I am a guitarist and I have been manually writing the tabs; having the transcription done automatically with Python would be a great time saver :)
    Thanks for your really informative and interesting videos
    Greetings

  • @mylesmontclair · 7 months ago · +2

    This was awesome! Thanks!

  • @warssup · 1 year ago · +3

    Can we just ignore the fact that I just submitted a publication to a high-ranking journal, and Jeff's explanation of why the FFT input has the same shape as the output showed me that I made a fundamental mistake there using the FFT? (Luckily, that's just a side note of the paper, and even if the reviewers mention it, the research as a whole remains valid.)
    But thanks again, Jeff, for teaching me fundamental stuff that I sometimes overlook :D
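
The shape point is easy to verify directly in NumPy; a minimal check, not taken from the video's notebook:

```python
import numpy as np

# One second of a 440 Hz sine (A4) sampled at 44.1 kHz.
fs = 44100
t = np.arange(fs) / fs
signal = np.sin(2 * np.pi * 440.0 * t)

# The complex FFT returns one bin per input sample: same shape in and out.
spectrum = np.fft.fft(signal)
print(signal.shape, spectrum.shape)   # (44100,) (44100,)

# For real input, rfft keeps only the non-redundant half: n // 2 + 1 bins.
print(np.fft.rfft(signal).shape)      # (22051,)
```

The second half of the full FFT of a real signal is just the complex-conjugate mirror of the first half, which is why `rfft` can drop it.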

  • @patrickes4540 · 1 year ago · +3

    Jeff, please apply this to vocals. Maybe a live visualisation for a singer, or an analysis afterwards. Interesting points would be hitting the correct notes, or vibrato analysis. This would be sensational!

    • @HeatonResearch · 1 year ago · +2

      Maybe, that is a bit more complex. Right now I am mostly focused on separating out the considerable noise from just the instrument tracks.

  • @Aadityashankar · 1 year ago · +12

    Hello Mr. Jeff, your explanation and code are absolutely wonderful. But rather than producing the output as an animation, how can we store the recognized notes in a list? Could you please show that? Thanks in advance!!

    • @jakubokua2083 · 11 months ago · +1

      Did you find anything about how to do it, by any chance?
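
Neither reply got an answer here; one way to collect the notes as data instead of animating them is to run a short-time FFT per frame and append the results to a plain Python list. A rough sketch, where the frame size, threshold, and `freq_to_note_name` helper are illustrative choices, not Jeff's code:

```python
import numpy as np
from scipy.io import wavfile

def freq_to_note_name(freq):
    """Name of the nearest equal-tempered note (A4 = 440 Hz)."""
    names = ['C', 'C#', 'D', 'D#', 'E', 'F', 'F#', 'G', 'G#', 'A', 'A#', 'B']
    midi = int(round(69 + 12 * np.log2(freq / 440.0)))
    return names[midi % 12] + str(midi // 12 - 1)

fs, audio = wavfile.read('song.wav')              # placeholder path
if audio.ndim > 1:
    audio = audio.mean(axis=1)                    # collapse stereo to mono

frame = 4096                                      # samples per analysis window
notes = []                                        # list of (time_sec, note_name)
for start in range(0, len(audio) - frame, frame):
    window = audio[start:start + frame] * np.hanning(frame)
    mags = np.abs(np.fft.rfft(window))
    freqs = np.fft.rfftfreq(frame, d=1.0 / fs)
    peak = freqs[np.argmax(mags)]                 # loudest bin (monophonic assumption)
    if peak > 20:                                 # skip near-DC frames
        notes.append((start / fs, freq_to_note_name(peak)))

print(notes[:10])
```

From there the list can be de-duplicated, written to CSV, or fed into a MIDI writer (see the sketch further down the thread).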

  • @robertobaldizon6010 · 9 months ago · +1

    Thank you very much. Very clear.

  • @clementsoullard · 22 days ago

    Very didactic! Thanks

  • @micaelbh · 1 year ago · +1

    Excellent teaching and explanation. Do you also have an example for recognizing musical chords?

  • @vladimirbosinceanu5778 · 1 year ago · +1

    Great vid! Thank you

  • @gabrielleiva5944 · 1 year ago · +1

    Awesome video! It would be awesome to extract MIDI information from Audio.

    • @HeatonResearch · 1 year ago · +2

      That is kind of what I am doing on a related project.

    • @gabrielleiva5944 · 1 year ago

      I wonder if one of the biggest challenges facing this project would be accuracy.
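
For the MIDI idea: once note detection yields pitches with start and end times, writing a .mid file is the easy part. A small sketch using the pretty_midi library (an assumption; any MIDI writer works), with made-up note data:

```python
import pretty_midi

# Suppose detection produced (start_sec, end_sec, midi_pitch) tuples; these are invented.
detected = [(0.00, 0.45, 60), (0.50, 0.95, 64), (1.00, 1.45, 67)]   # C4, E4, G4

pm = pretty_midi.PrettyMIDI()
piano = pretty_midi.Instrument(program=0)         # program 0 = Acoustic Grand Piano
for start, end, pitch in detected:
    piano.notes.append(pretty_midi.Note(velocity=100, pitch=pitch,
                                        start=start, end=end))
pm.instruments.append(piano)
pm.write('detected.mid')                          # placeholder output path
```

As the reply above says, the hard part is accuracy: getting clean pitches and note boundaries out of a real recording, especially a polyphonic one.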

  • @JustLikeHimFr · 1 year ago

    Pleaseee do the video where you perform frequency shifting or pitch shifting!!! I would love to see that, I would really appreciate it if you do.
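
No such video yet; for anyone impatient, librosa ships a ready-made pitch shifter. A minimal sketch, assuming librosa and soundfile are installed and 'input.wav' is a placeholder:

```python
import librosa
import soundfile as sf

y, sr = librosa.load('input.wav', sr=None)                   # keep original sample rate
shifted = librosa.effects.pitch_shift(y, sr=sr, n_steps=4)   # up 4 semitones (a major third)
sf.write('shifted.wav', shifted, sr)
```

librosa's implementation is essentially a phase-vocoder time stretch followed by resampling; doing that from scratch with raw FFTs would make a good follow-up video.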

  • @hivemindo1722 · 1 year ago · +1

    Nice video! Maybe Realtime in Blender? Drivers in Geometry Nodes?

  • @ivansepulveda2018 · 1 year ago · +1

    Hey Jeff! Great video. Question for you: some WAV/FLAC files separate audio data by channel (left and right). How do you, or how would you, deal with that?
    Do you average the L/R channels? Do you FFT each channel separately?
    thanks!
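
Both options from the question are common: average L and R into mono when you only care about which notes are present, or transform each channel separately when the stereo placement matters. A small sketch assuming a stereo WAV file:

```python
import numpy as np
from scipy.io import wavfile

fs, audio = wavfile.read('stereo.wav')        # stereo data has shape (n_samples, 2)
audio = audio.astype(np.float64)

# Option 1: collapse to mono by averaging the channels, then one FFT.
mono = audio.mean(axis=1)
mono_spectrum = np.fft.rfft(mono)

# Option 2: FFT each channel separately (axis=0 transforms along the samples).
per_channel = np.fft.rfft(audio, axis=0)      # shape (n_samples // 2 + 1, 2)

print(mono_spectrum.shape, per_channel.shape)
```

For plain note detection, averaging is usually enough; per-channel spectra matter mostly when instruments are panned hard to one side.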

  • @__--JY-Moe--__ · 1 year ago · +1

    yes it "hertz", hopefully not that often though! ha..ha.. pun! good luck! wow! wish I was taking the class!

    • @HeatonResearch · 1 year ago · +1

      Debugging this certainly "hurtz" a few times.

  • @patdesse6693 · 1 year ago · +1

    Thanks a lot

  • @piusoblie2013 · 1 year ago · +1

    What if I were reading the data directly from an audio input, like a microphone or sound sensor? How would I read the notes then?

  • @ilovenaturesound5123 · 1 year ago

    Could you make a video on how to detect the pitch accent of a word in languages like Japanese, where each word consists of low and high pitches, and determine the pitch of each syllable?
    Basically, I have an audio file of a person saying a Japanese word, e.g. あめ (pronounced ah-me). I want to determine the pitch accent of each syllable, whether it's High (H) or Low (L). In the case of あめ (ah-me), two syllables, the result could be HH, HL, LH, or LL. Please keep in mind that it should measure pitch relative to the other syllables. So HH and LL might not actually be appropriate, and the result could be 3 cases: Neutral-Neutral, HL, LH, rather than 4 cases.
    If the result is HL, it would mean "雨" (rain). But if the result is LH, it would mean "飴" (candy).
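
A very rough sketch of the "relative pitch per mora" idea, assuming librosa for f0 tracking and the crude shortcut of splitting the voiced frames in half (a real solution would need proper syllable segmentation, e.g. forced alignment); 'ame.wav' is a placeholder recording:

```python
import numpy as np
import librosa

y, sr = librosa.load('ame.wav', sr=None)

# pyin gives an f0 contour; unvoiced frames come back as NaN and are dropped.
f0, voiced_flag, voiced_prob = librosa.pyin(
    y, fmin=librosa.note_to_hz('C2'), fmax=librosa.note_to_hz('C6'), sr=sr)
f0 = f0[~np.isnan(f0)]

# Crude assumption: first half of the voiced frames = first mora, second half = second.
first, second = np.split(f0, [len(f0) // 2])
ratio = np.median(second) / np.median(first)

if ratio > 1.05:
    print('LH (rising, like 飴 candy)')
elif ratio < 0.95:
    print('HL (falling, like 雨 rain)')
else:
    print('roughly level')
```

The 5% thresholds are arbitrary; comparing medians keeps the estimate relative, which is what the comment asks for.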

  • @rhard007 · 1 year ago · +1

    Hi Jeff, is there a way to use deep learning to identify the frequencies instead of using FFTs? Which one is more computationally expensive?

    • @Geryf · 1 year ago · +1

      Deep learning would be way less expensive, so long as a model is trained on various FFTs and labeled notes. Calling the model would take far fewer computations than iterating over an FFT and finding the top N notes by doing a peak analysis.
      I'm looking to start training a model. The issue is that every instrument has its own distinct overtones.
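
For context, the "iterate over an FFT and find the top N notes" baseline mentioned above is only a few lines with scipy's peak finder; the 10% height threshold here is an illustrative choice:

```python
import numpy as np
from scipy.signal import find_peaks

def top_peaks(window, fs, n=3):
    """Return the n strongest spectral peaks (in Hz) of one audio frame."""
    mags = np.abs(np.fft.rfft(window * np.hanning(len(window))))
    freqs = np.fft.rfftfreq(len(window), d=1.0 / fs)
    peaks, props = find_peaks(mags, height=mags.max() * 0.1)
    strongest = peaks[np.argsort(props['peak_heights'])[::-1][:n]]
    return freqs[strongest]

# A frame containing A4 (440 Hz) plus its second harmonic at 880 Hz.
fs = 44100
t = np.arange(4096) / fs
frame = np.sin(2 * np.pi * 440 * t) + 0.8 * np.sin(2 * np.pi * 880 * t)
print(top_peaks(frame, fs))       # roughly [440., 880.]
```

A learned model trades this per-frame search for a fixed-cost forward pass, but as the reply notes, it then has to generalize across instruments with different overtone patterns.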

  • @SA-oj3bo · 3 months ago

    Hi Jeff, I would like a solution that outputs, in real time, the main frequency and amplitude of what I am humming into a microphone. Any suggestion on how to do that? Thanks in advance!
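
A minimal sketch of that, assuming the sounddevice library for microphone capture (not something from the video); each block of audio is reduced to its loudest FFT bin:

```python
import numpy as np
import sounddevice as sd

fs = 44100
block = 4096                                   # about 93 ms of audio per reading

def report(indata, frames, time, status):
    mono = indata[:, 0]
    mags = np.abs(np.fft.rfft(mono * np.hanning(len(mono))))
    freqs = np.fft.rfftfreq(len(mono), d=1.0 / fs)
    peak = np.argmax(mags)
    print(f"{freqs[peak]:7.1f} Hz   amplitude {mags[peak]:.3f}")

# Stream the microphone and print the dominant frequency of each block.
with sd.InputStream(channels=1, samplerate=fs, blocksize=block, callback=report):
    sd.sleep(10_000)                           # run for ten seconds
```

For humming, an autocorrelation or YIN-style pitch tracker is usually steadier than the raw FFT peak, but this shows the basic loop.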

  • @matshagstrom9839 · 2 months ago

    I'm looking for a computer program that can record the notes I'm singing while humming along to a drum beat on headphones.
    Any suggestions?

  • @stevenroyalton7789 · 6 days ago

    The chicken man

  • @DIYRobotGirl · 11 months ago

    Could this work for storing a song, or a piece of a song, in a program so that the inverse kinematics of a robot could dance to that song? If the kinematics were something like raising an arm, elbow, or neck when the music does a thing, the kinematics could follow that. Maybe even have a robot dance a skit to live music, if the music is recorded in a form the computer can read as code and match the kinematics to. I have been trying to figure out how music or a song could be put into machine-readable code rather than just played as an MP3, so that a program could actually know the song in code and map it to the kinematics of multiple servos.
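
One hedged way to get a song into machine-readable events a robot controller could act on is beat tracking: extract the beat timestamps from the recording and schedule a pose per beat. A toy sketch assuming librosa; the pose names and 'song.mp3' are made up:

```python
import librosa

y, sr = librosa.load('song.mp3')                        # placeholder audio file
tempo, beat_frames = librosa.beat.beat_track(y=y, sr=sr)
beat_times = librosa.frames_to_time(beat_frames, sr=sr)

# Toy "choreography": alternate two hypothetical servo poses, one per beat.
poses = ['raise_arm', 'nod_head']
schedule = [(t, poses[i % 2]) for i, t in enumerate(beat_times)]
print(tempo)
print(schedule[:8])                                     # (seconds, pose) pairs
```

The (time, pose) list is the "song as code" part; driving the actual servos and inverse kinematics from those timestamps is a separate robotics problem.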

  • @Geryf · 1 year ago

    Hey Jeff! I love your video. Just wondering if you've considered using cepstral processing or the Harmonic Product Spectrum algorithm to determine the fundamental within each individual FFT? I'm trying the peak analysis as shown in your code and it seems to be very buggy, even with monophonic instruments (some overtones come out stronger than their fundamental, which affects peak detection). Please let me know what you think, thank you!

    • @ivansepulveda2018 · 1 year ago · +1

      @Gery if you have a GitHub for this I'd love to see it! (Apologies if this is a duplicate comment, YouTube is acting weird on me)
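
For anyone else hitting the same overtone problem: the Harmonic Product Spectrum idea is only a few lines. Downsample the magnitude spectrum by 2, 3, ... and multiply, so energy at harmonic multiples reinforces the fundamental bin even when an overtone peak is louder. A rough sketch, not from the video:

```python
import numpy as np

def hps_and_naive(window, fs, harmonics=4):
    """Return (HPS fundamental, naive loudest-bin frequency) for one frame."""
    mags = np.abs(np.fft.rfft(window * np.hanning(len(window))))
    freqs = np.fft.rfftfreq(len(window), d=1.0 / fs)
    product = mags.copy()
    for h in range(2, harmonics + 1):
        decimated = mags[::h]                  # spectrum compressed by factor h
        product[:len(decimated)] *= decimated  # harmonics pile onto the fundamental bin
    limit = len(mags) // harmonics             # only bins whose harmonics stay in range
    return freqs[np.argmax(product[:limit])], freqs[np.argmax(mags)]

# A tone whose 2nd harmonic (220 Hz) is louder than its 110 Hz fundamental.
fs = 44100
t = np.arange(8192) / fs
frame = (0.4 * np.sin(2 * np.pi * 110 * t) + 1.0 * np.sin(2 * np.pi * 220 * t)
         + 0.3 * np.sin(2 * np.pi * 330 * t) + 0.2 * np.sin(2 * np.pi * 440 * t))
hps_f0, naive_f0 = hps_and_naive(frame, fs)
print(f"naive peak: {naive_f0:.1f} Hz, HPS estimate: {hps_f0:.1f} Hz")   # ~220 vs ~110
```

The cepstrum route is similar in spirit (a peak in the spectrum of the log-spectrum at the pitch period); either is more robust than picking the single loudest bin.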

  • @thewarhammer6606 · 1 year ago · +1

    FREE-quency.

    • @dslayer218 · 7 months ago

      Exactly. It's so difficult to focus on what's being said when he's constantly saying "frinquency".

  • @TheGroundskeeper · 1 year ago

    Holy God, I haven't seen your videos in a couple of years and didn't recognize you at all.