Thanks, Jeff, I find your video really helpful as a hobbyist programmer and musician.
Could you please do a video about automatic guitar transcription? For example: show the notes played on the guitar in real time on a virtual guitar fretboard, and maybe save it as a video/GIF file for later viewing?
I am a guitarist and I have been writing tabs by hand - having the transcription done automatically with Python would be a great time saver :)
Thanks for your really informative and interesting videos
Greetings
This was awesome! Thanks!
Can we just ignore the fact that I just submitted a publication to a high-ranking journal, and Jeff's explanation of why the FFT input has the same shape as the output showed me that I made a fundamental mistake using the FFT there? (Luckily, that's just a side note of the paper, and even if the reviewers mention it, the research as a whole remains valid.)
But thanks again Jeff for teaching me fundamental stuff that I sometimes overlook :D
Jeff, please apply this to vocals. Maybe a live visualization for a singer, or an analysis afterwards. Interesting points are hitting the correct notes and vibrato analysis. This would be sensational!
Maybe - that is a bit more complex, though. Right now I am mostly focused on separating out the considerable noise from just the instrument tracks.
Hello Mr. Jeff, your explanation and code are absolutely wonderful. But rather than producing the output as an animation, how can we store the recognized notes in a list? Could you please do that? Thanks in advance!!
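One way to collect notes in a list instead of animating them, as a rough sketch: assuming you already have the peak frequency of each analysis frame (the `peak_freqs` values below are made up for illustration), convert each frequency to a note name and append it.

```python
import math

NOTE_NAMES = ["C", "C#", "D", "D#", "E", "F", "F#", "G", "G#", "A", "A#", "B"]

def freq_to_note(freq, a4=440.0):
    """Map a frequency in Hz to the nearest note name, e.g. 440.0 -> 'A4'."""
    semitones = round(12 * math.log2(freq / a4))  # distance from A4 in semitones
    midi = 69 + semitones                         # MIDI note number (A4 = 69)
    return NOTE_NAMES[midi % 12] + str(midi // 12 - 1)

# Instead of drawing each frame, collect one note per analysis frame:
peak_freqs = [261.63, 329.63, 392.0]  # hypothetical per-frame peak frequencies
recognized = [freq_to_note(f) for f in peak_freqs]
print(recognized)  # ['C4', 'E4', 'G4']
```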
Thank you very much. Very clear.
Very didactic! Thanks
Excellent teaching and explanation. Do you have any example also to recognize musical chords?
Awesome video! It would be awesome to extract MIDI information from Audio.
I wonder if one of the biggest challenges facing this project would be accuracy.
Pleaseee do the video where you perform frequency shifting or pitch shifting!!! I would love to see that, I would really appreciate it if you do.
Nice video! Maybe Realtime in Blender? Drivers in Geometry Nodes?
Hey Jeff! Great video. Question for you: Some Wav/FLAC files separate audio data by channel (left and right). How do you or how would you deal with that?
Do you average the L/R channels? Do you FFT each channel separately?
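For what it's worth, a common approach (not necessarily what Jeff does) is to average the channels down to mono before the FFT; a separate FFT per channel also works if the left and right content differ. A small sketch with a synthetic stereo buffer standing in for decoded WAV/FLAC data:

```python
import numpy as np

# Hypothetical stereo buffer: shape (n_samples, 2), columns are L and R.
sr = 44100
t = np.arange(sr) / sr
stereo = np.column_stack([np.sin(2 * np.pi * 440 * t),        # left: A4
                          0.5 * np.sin(2 * np.pi * 440 * t)])  # right: quieter A4

# Option 1: average the channels down to mono, then FFT once.
mono = stereo.mean(axis=1)
spectrum = np.abs(np.fft.rfft(mono))
freqs = np.fft.rfftfreq(len(mono), 1 / sr)
print(freqs[spectrum.argmax()])  # 440.0

# Option 2: FFT each channel separately (useful if L and R differ).
per_channel = np.abs(np.fft.rfft(stereo, axis=0))  # shape (n_bins, 2)
```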
thanks!
yes it "hertz", hopefully not that often though! ha..ha.. pun! good luck! wow! wish I was taking the class!
What if I was reading the data directly from an audio input? Like a microphone and sound sensor. How do I read those notes?
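Roughly the same pipeline works; the only new part is grabbing buffers from the audio device. A library such as `sounddevice` can do the capture (that choice is an assumption - any library that yields raw samples works), and once you have a frame of samples, the FFT/peak step is unchanged. Sketch with a synthetic tone standing in for the mic buffer:

```python
import numpy as np

sr = 44100     # sample rate
frame = 4096   # samples per analysis frame

# With a real microphone you might grab a frame via the sounddevice library:
#   import sounddevice as sd
#   buf = sd.rec(frame, samplerate=sr, channels=1)[:, 0]; sd.wait()
# Here a synthetic 330 Hz tone (roughly E4) stands in for the mic buffer:
t = np.arange(frame) / sr
buf = np.sin(2 * np.pi * 330 * t)

# The note-detection step is the same as for a file: FFT, then find the peak bin.
spectrum = np.abs(np.fft.rfft(buf))
freqs = np.fft.rfftfreq(frame, 1 / sr)
peak_hz = freqs[spectrum.argmax()]
print(peak_hz)  # within one FFT bin (~10.8 Hz) of 330
```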
Could you make a video on how to detect pitch accent of a word in languages like Japanese where each word consist of low and high pitch and determine the pitch of each syllable.
Basically, I have an audio file of a person saying a Japanese word, e.g. あめ (pronounced ah-me). I want to determine the pitch accent of each syllable, whether it's High (H) or Low (L). In the case of あめ (ah-me) (two syllables), the result could be HH, HL, LH, or LL. Please keep in mind that pitch should be measured relative to the other syllables, so HH and LL might not actually be appropriate; the result could then be 3 cases - Neutral-Neutral, HL, LH - rather than 4.
If the result is HL, it would mean 雨 (rain). But if the result is LH, it would mean 飴 (candy).
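One hedged sketch of the classification step: assume you have already segmented the two syllables and estimated an average pitch for each (e.g. with a pitch tracker such as librosa's `pyin` - that's an assumption, any f0 estimator works). Then compare the two values relatively, with a Neutral band so near-equal syllables aren't forced into HH/LL. The function name and the 10% threshold below are illustrative, not a linguistic standard:

```python
def classify_accent(f0_first, f0_second, rel_threshold=0.1):
    """Classify a two-syllable word's pitch accent from per-syllable
    average pitch (Hz). The comparison is relative, so near-equal
    syllables come out 'Neutral' instead of HH or LL."""
    ratio = f0_first / f0_second
    if ratio > 1 + rel_threshold:
        return "HL"   # e.g. あめ -> 雨 (rain)
    if ratio < 1 / (1 + rel_threshold):
        return "LH"   # e.g. あめ -> 飴 (candy)
    return "Neutral"

print(classify_accent(180.0, 140.0))  # HL
print(classify_accent(140.0, 180.0))  # LH
print(classify_accent(150.0, 152.0))  # Neutral
```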
Hi Jeff, is there a way to use deep learning to identify the frequencies instead of using FFTs? Which one is more computationally expensive?
Deep learning would be way less expensive, as long as a model is trained on various FFTs with labeled notes. Calling the model would take far fewer computations than iterating over an FFT and finding the top N notes via peak analysis.
I’m looking to start training a model. The issue is that every instrument has its own distinct overtones.
Hi Jeff, I would like a solution that outputs, in real time, the main frequency and amplitude of what I am humming into a microphone. Any suggestion on how to do that? Thanks in advance!
I’m looking for a computer program that can record the notes. I’m singing/humming along to a drum beat while listening on headphones.
Any suggestions?
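A rough sketch of the per-frame analysis, assuming you can already capture microphone frames (the synthetic hum below stands in for one): window the frame, FFT it, and report the peak bin's frequency plus an amplitude estimate. In a live setup you'd run this in a loop over incoming frames.

```python
import numpy as np

def main_freq_and_amp(buf, sr):
    """Return (dominant frequency in Hz, estimated amplitude) for one frame."""
    win = np.hanning(len(buf))
    spectrum = np.abs(np.fft.rfft(buf * win))   # window reduces spectral leakage
    peak = spectrum.argmax()
    freq = peak * sr / len(buf)                 # bin index -> Hz
    amp = 2 * spectrum[peak] / win.sum()        # undo window gain for a rough amplitude
    return freq, amp

sr = 8000
t = np.arange(sr) / sr
hum = 0.3 * np.sin(2 * np.pi * 220 * t)         # a 220 Hz (A3) hum at amplitude 0.3
freq, amp = main_freq_and_amp(hum, sr)
print(freq, amp)
```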
The chicken man
Could this work to store a song (or a piece of one) in a program, so that a robot's inverse kinematics could dance to it? If the kinematics were mapped so that an arm, elbow, or neck moves when the music does a particular thing, the robot could follow along. Maybe a robot could even dance a routine to live music, if the music is recorded in a form the computer can read as code and match kinematics to it. I have been trying to figure out how a song could be put into machine-readable code rather than just played as an MP3 - so that a program actually knows the song in code and can map it to the kinematics of multiple servos.
Hey Jeff! I love your video. Just wondering if you've considered using cepstral processing or the Harmonic Product Spectrum algorithm to determine the fundamental within each individual FFT? I'm trying the peak analysis as shown in your code and it seems to be very buggy even with monophonic instruments. (Some overtones are higher than their base frequency, which affects peak detection.) Please let me know what you think, thank you!
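For anyone curious, a minimal sketch of the Harmonic Product Spectrum idea (my own toy version, not Jeff's code): downsample the magnitude spectrum by 2, 3, ... and multiply the copies together, so the harmonics pile up on the fundamental even when an overtone's peak is taller than the fundamental's.

```python
import numpy as np

def harmonic_product_spectrum(buf, sr, n_harmonics=3):
    """Estimate the fundamental in Hz via the Harmonic Product Spectrum:
    multiply the magnitude spectrum by itself downsampled by 2, 3, ...
    so harmonics reinforce the true fundamental."""
    spectrum = np.abs(np.fft.rfft(buf * np.hanning(len(buf))))
    hps = spectrum.copy()
    for h in range(2, n_harmonics + 1):
        decimated = spectrum[::h]                 # spectrum compressed by factor h
        hps[:len(decimated)] *= decimated
    # Only the first 1/n_harmonics of the bins got all the factors:
    peak = hps[:len(spectrum) // n_harmonics].argmax()
    return peak * sr / len(buf)                   # bin index -> Hz

# A tone whose 2nd harmonic is *stronger* than the fundamental - exactly
# the case where plain peak-picking reports the wrong octave:
sr, n = 8000, 8000
t = np.arange(n) / sr
buf = (0.4 * np.sin(2 * np.pi * 200 * t)    # fundamental
       + 1.0 * np.sin(2 * np.pi * 400 * t)  # dominant 2nd harmonic
       + 0.3 * np.sin(2 * np.pi * 600 * t)) # 3rd harmonic
print(harmonic_product_spectrum(buf, sr))  # 200.0, not 400.0
```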
@Gery if you have a GitHub repo for this I'd love to see it! (Apologies if this is a duplicate comment, YouTube is acting weird on me)
FREE-quency.
Exactly. It's so difficult to focus on what's being said when he's constantly saying frinquency
Holy God, I haven't seen your videos in a couple of years and didn't recognize you at all
thank u jeff u just saved me and my math grade 👍