Humans: "AI will enslave all of humanity!"
Meanwhile, AI: "Skrillex is a jazz"
@@w花b Lmfao
Missing first half of your comment being
"2025: AI destroyed the humanity"
2020:"
lol
@@Lol-os2bo lol
hey sorry for irrelevance but do you know how to recreate a sound similiar to the one at 0:09 in this song - ua-cam.com/video/-VXIcgf8biU/v-deo.html&ab_channel=IDJVideos.TV thankss
"i made a DAW in Excel but thats it...."
lmaooo the classic undersell
I am the 1000th like
Good day
Yeah man it’s not like it’s a huge accomplishment. I can’t even use patcher to make a basic effect
*taking notes*
- Being humble while doing something impressive is impressive.
AI: This kick is a kick
Also AI: Skrillex is a dubstep
Scary, I know.
this debate is closed now. :))
@@broland115 there was never a debate bruv, just a lil' bunch of retards that said shit like "skrillex is brostep" (which is not actually even a genre of music or anything) without actually cringing
Why isn't Dubstep called Wubstep?
Hmmm yes the floor is made out of floor
AI listens to Dylan's music for 24 hours and replaces him
🤣
wait I think I know you??
More like for eternity because he makes all kinds of music
and replaces his channel
I would pay to have an AI do that to my music
_SKRILLEX IS A DUBSTEP_
Skrillex is a hardstyle
Skrillex is a jazz
Skrillex is a classical
Skrillex is a president of Antarctica
Skrillex is Obama
thumbs up if you thought it was gonna make dubstep
Yeah I’m a little disappointed. This was more like teaching a computer what dubstep is and then not using it to produce dubstep and knowing that it will also not appreciate the education 😂
To be fair generative tasks like making music are really difficult. You might be interested in the OpenAI jukebox openai.com/blog/jukebox/ which does a pretty good job. But it still has a lot of audio artifacts even if it's made by really smart people who know a lot more about AI.
@@IQuick143cz **edit** I just realized that your link is something similar to what I'm describing.
- Check out the video AI makes a Nirvana song. It was kind of cool. Buuut, it was basically just a mishmash of rearranged Nirvana riffs, pitched up and down to match and put in a song-like structure. Like one moment it would vaguely sound like In Bloom, the next part would sound vaguely like Polly. But the lyrics were strangely Cobain-esque. I think AI will be good at recreating the style of established music one day, but... I don't think it will be able to create anything original AND good. It won't be able to write anything of enough quality that it's actually worth listening to, or be able to replace artists. To do that it would have to have creative, original thought programmed into it, which ultimately comes from a human. At least, it won't be able to do that anytime remotely soon.
@@d-rockanomaly9243 please God please please be right i am sooooo scared
Check out OpenAI Jukebox. It does just that.
Skrillex is Jazz
Skrillex: opening Serum
Dude are you from Auackland? From NZ
Lmao 😂😂
@@fllnstrs auqland*
@@portemanteau3802 *Aqualand
Cristian Bianchi Cockland*
My mans just threw a whole college study in 14 minutes
Jatog
Phd
more like intro course, first week :) - Comp Sci and AI undergrad
@@SquishySwishy definitely not my man
Damn bro, clicked on this video to watch you torture a robot, and now you're making me learn stuff? I didn't agree to this 😤
the viewers were the ones actually getting tortured the whole time!
@@DylanTallchief Shit, we got played lol
Dylan Tallchief tortures a robot for who knows how long
Honestly I fully expected this to be in excel
don't we all?
Thanks for the shoutout! 100% agree that knowing the math is good but not required to make something amazing. Also agreed you should up the model complexity. Try adding more layers, neurons per layer, and different activation functions (ReLU/tanh/sigmoid). Nice work! Subscribed.
everybody gansta until dylan tallchief tricks you into taking a math lesson
Not again D:
RIP to all the people who thought we were gonna hear the AI make its own dubstep at the end
Elon will wake XÆA-12 up with dubstep
Funfact: the "a" in his/her name stands for the dubstep track archangel by burial.
@@isaygg.butitwasntgg.itwasb4958 that makes it even better
He's actually liking some content from dubstep producers on different platforms.
Waking up and walking to school with bangarang playing in the background.
XÆA-12 is a dubstep
I recently wrote my dissertation on BPM analysis using neural networks and used a lot of similar techniques. Dylan did a surprisingly good job of understanding and explaining everything here. Good job!
Some improvements you could make:
- You definitely overfit by training for so long; that was a lot of wasted time. A common technique is to use an early stopping callback that stops training after a certain number of epochs without improvement (see the sketch after this list).
- In my research I found that longer clips worked better; although that might not be the case for this network, it could be worth experimenting with 5 and 10 second clips.
- There is very little publicly available training data for this stuff and so creating your own could help expand the dataset. I did this by exporting my rekordbox collection to XML format and parsing it with a python script to produce training data.
- There's some very interesting research that suggests using 1D convolutional layers oriented along the frequency axis could drastically improve a model of this type.
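A minimal sketch of the early-stopping suggestion, assuming a Keras setup roughly like the one in the video; X_train/y_train and X_val/y_val stand in for the MFCC features and labels, and the layer sizes are placeholders, not the video's actual code.

import tensorflow as tf

# Stop once validation accuracy hasn't improved for 10 epochs and keep the best weights seen.
early_stop = tf.keras.callbacks.EarlyStopping(
    monitor="val_accuracy", patience=10, restore_best_weights=True)

# Tiny stand-in classifier over MFCC matrices (13 coefficients x 130 frames here).
model = tf.keras.Sequential([
    tf.keras.layers.Flatten(input_shape=(13, 130)),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(10, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# X_train/y_train and X_val/y_val are assumed to be MFCC arrays and genre labels.
model.fit(X_train, y_train,
          validation_data=(X_val, y_val),
          epochs=500,                # just an upper bound; the callback usually stops far earlier
          callbacks=[early_stop])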
School: Here's a Summer break!
YouTube: Yes, but actually no.
In 25 years some revisionist is going to find this video and charge you with torturing and exploiting AI and you'll be cancelled.
Lmao
Roko's Basilisk. He's already committed his crime, as did a lot of us.
@@perigee9281 Lmao
What i learned today: Music is just waves that go like wo-wo-wowowowo up and down. up and down. But just very many of them together. Nice :)
The tighter the waves, the higher the pitch is
everything you have ever heard is just a collection of sine waves
Fancy seeing a grapple god here, just wanted to say thanks for your Apex series of videos. They made me passionate about the movement system in that game
Ha ha sound go brrr
This is dope, super interesting stuff. Not too many people are making content like this and I dig it
In the future there will be no human voice actors.
Not soon enough
That'd be cool. Imagine developing your own game or animation, designing each character's voice and not having to record anything
@@Gabriel-mw5ro That is already possible actually
go away you posthumanist technocrat
this post was made by anprim tribe
That's scary
when u think that playing drums in DOOM is too much, he gives u this
Next time, I'll try telling my math teacher that "it does some equations"
IM SO READY TO HEAR AI MADE DUBSTEP OMG
You here?🤔
YESS I love how in depth you go with your videos. They're well edited and fun to watch but I'm also actually learning a ton of new shit at the same time.
Can't wait for your AI to produce a dubstep track!
working on it :D...
I'm a PhD student studying auditory neuroscience and was not expecting to get a lecture on MFCCs from this channel 😂
but you did a great job explaining and yes i agree the naming convention on cepstrum/etc is dumb
Thank you so much for this!! :) great video. Even with a basic understanding of generic NN’s it’s intimidating to try to apply it to music imo
This was pretty cool actually. Imagine feeding an AI a bunch of Skrillex tracks and getting it to auto generate random Skrillex sounding dubstep. That's an idea I've had for a while but am too stupid to try :C
I think a part of the accuracy problem stems from sample diversity, but another factor that would prevent getting higher accuracy would be the fact that music has periods of rest, when sound is absent. If an interval occurs during a period of rest, it would probably mess up analysis.
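One possible workaround for those silent stretches, sketched with librosa (the same library used in the video); "song.wav", the 30 dB threshold and the 3-second clip length are just placeholder choices, not anything from the video.

import librosa
import numpy as np

y, sr = librosa.load("song.wav", sr=22050)               # "song.wav" is a placeholder path

# Keep only the intervals louder than (peak level - 30 dB), i.e. drop the rests.
intervals = librosa.effects.split(y, top_db=30)
y_active = np.concatenate([y[start:end] for start, end in intervals])

# Slice what's left into ~3-second clips for the classifier.
clip_len = 3 * sr
clips = [y_active[i:i + clip_len] for i in range(0, len(y_active) - clip_len + 1, clip_len)]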
but seriously this thing was very educational. I HAD TO LITERALLY PAUSE EVERY 2 SECONDS TO COMPREHEND WHAT Dylan was really talking about and I have to say it was worth it. It's amazing how you can turn hours of learning into minutes.
EDIT: Thanks to @@UCFpUx-4O2zgsOM0Wp0HRTqw for the help.
I would guess the problem is not in the model. Considering Spleeter actually does better at processing non-electronic music, it seems that those songs tend to be harder to nail at close to perfect accuracy. Probably because the genre of electronic music itself is full of external influences. Clap samples can be especially hard, since they can have unique characteristics in certain genres, which can make them sound close to snares in other genres.
Note : I'm just a regular CS student.
english intensifies
@@braznem Well I'm sorry I couldn't have written it better. Any suggestions?
@@karakunai_dev too long, got other things to do D:, sorry.
@@karakunai_dev I would say it'd be best if it was written like: "I would guess the problem is not in the model. Considering Spleeter actually does better at processing non-electronic music, it seems that those songs tend to be harder to nail at close to perfect accuracy. Probably because the genre of electronic music itself is full of external influences. Clap samples can be especially hard, since they can have unique characteristics in certain genres, which can make them sound close to snares in other genres.
Note : I'm just a regular CS student." i guess
I love this type of stuff, keep it up!
That ProQ visualizer really justifies the video being 60fps
this is EXACTLY WHAT I WAS LOOKING FOR
The interesting thing about MFCCs, cepstrums, quefrencies and all of that mixed-up-letters jazz is that the transformation MFCC performs with the Mel filter banks brings the sound into a specific domain that is neither the frequency nor the time domain. And that's why they decided to give them funky names. Scientists are fun, aren't they? xD
Fruit fly researchers tho
Your visual way of explaining the cepstrum is actually amazing. I finally understood it intuitively after two Comp Sci university courses where I didn't get it.
You are a very entertaining person my dude..thanks for the effort you put into your ideas and videos.
You know, this sparked my interest into A.I research or at least the basics of it, thank you
Really interesting. This video deserves more views. Seems like a lot of work was put into this
My boy Dylan out here tricking people into loving maths
Unrelated story, a year ago maybe I was listening to dubstep with crappy EarPods so the sound leaked out and my mom heard it and was like “are you listening to house music” damn mom, it was Excision
so you spent a year, and a third of it making an AI to listen to dubstep, STONKS
If you haven't looked into them already, convolutional neural networks are often used for continuous or variable sized input data. These networks are made up of kernels (or windows) that slide across your input domain and output some value. Often times, these windows tend to learn patterns within your data (say you ran a CNN over numerical images, one window might have learned to recognize vertical lines and output a large value when it sees one, indicative of the number 1,4,7, etc...). The models typically contain many of these convolutional windows/kernels which learn different things.
In your case, librosa has processed your wav files into the beautiful graph you show at 6:50. This is essentially a big picture that you can input into your CNN.
Loved watching your video, you are doing some cool stuff. Feel free to shoot me a message if you'd like to talk a bit more about AI and music! I work at the company behind rave.dj, an AI that tries to mash up any two songs of your choice!
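A rough sketch of that CNN idea, treating librosa's MFCC matrix as a one-channel image; the file name, layer sizes and the 10-genre output are placeholders, not what the video or rave.dj actually uses.

import librosa
import numpy as np
import tensorflow as tf

y, sr = librosa.load("clip.wav", duration=3.0)            # placeholder file
mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=20)        # shape ~ (20, n_frames)
x = mfcc[np.newaxis, ..., np.newaxis]                     # add batch and channel axes

model = tf.keras.Sequential([
    tf.keras.layers.Conv2D(16, (3, 3), activation="relu",
                           input_shape=mfcc.shape + (1,)),
    tf.keras.layers.MaxPooling2D((2, 2)),
    tf.keras.layers.Conv2D(32, (3, 3), activation="relu"),
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(10, activation="softmax"),       # e.g. 10 genres
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
print(model.predict(x).shape)                              # (1, 10) class probabilities (untrained)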
It’s actually funny how you’ve stumbled upon AI and signal processing. I’ve been messing around in FL since I was 15, and once I started an engineering course at uni I quickly got into signal processing. Towards the end of the course stream it naturally led on to an introduction to ML. It’s really fun, and I don’t think I would have gotten into it if I hadn’t invested time into music production.
Wow, that is some serious dedication. Perhaps one day you will be able to create a style transfer application, to turn a song into a different genre, or even automatically master a song.
A problem you sometimes get in neural networks is overfitting. So be careful with the training and validation sets.
1 - Overfitting happens because sometimes the model fits the training values too closely, so the test values no longer match.
In neural networks, selecting a very high number of epochs can give you bad predictions. Maybe not in the first ones, but in the last ones it will be a mess.
It all depends on the dataset and how much you train your network.
REMEMBER: MORE COMPLEX DOESN'T MEAN MORE EFFICIENT.
2 - I am going to kill Dylan. Wtf did you explain in the cepstrum part, bro? It's not even close. The cepstrum gives you, in time, the repetition period of the signal, and the first part always represents the harmonic response.
It's in milliseconds because it's the inverse of the Fourier transform = time
(Fourier transform = frequency)
PS: Thanks for the intention of the video. Telecommunications engineers appreciate the effort.
If you want to apply this to more projects like this, visit AudiasLab at Universidad Autónoma de Madrid, EPS.
regularization!
I thought the inverse of a FFT is an IFFT, and that cepstrum is an FFT of a spectrum.
dagambler999 as you say, the inverse of the FFT is the IFFT, and that gives you the signal in time.
Signal in time:
FFT(signal) = signal in frequency
IFFT(signal in freq) = signal in time.
But the cepstrum is something apart: it is a representation of periodicity, with the harmonics in the first part.
Hope you understand better :)
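For anyone who wants to poke at this directly, a small numpy sketch of a (real) cepstrum under the usual definition: FFT, log of the magnitude, then an inverse FFT. The 220 Hz test tone and the 2-8 ms search window are arbitrary choices for illustration only.

import numpy as np

sr = 22050
t = np.arange(sr) / sr                                           # one second of samples
y = sum(np.sin(2 * np.pi * 220 * k * t) for k in range(1, 6))    # 220 Hz tone plus harmonics

spectrum = np.fft.rfft(y)
log_mag = np.log(np.abs(spectrum) + 1e-10)                       # small offset avoids log(0)
cepstrum = np.fft.irfft(log_mag)

# The cepstrum's axis ("quefrency") is back in seconds; the harmonic spacing of the
# tone shows up as a peak near its repetition period, 1/220 s ≈ 4.5 ms.
quefrency = np.arange(len(cepstrum)) / sr
lo, hi = int(0.002 * sr), int(0.008 * sr)                        # search between 2 ms and 8 ms
print(quefrency[lo + np.argmax(cepstrum[lo:hi])])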
AI: Welcome back to a new video *sips tea*
This guy learned programming for music to get ideas for youtube!
Now I want to hear what Hardstyle Jazz Trance would sound like
i think the next step would be to train a model on learning music theory: scales, modes, chords, etc... and see what the results would be. This might generate interesting chord progressions or even come up with its own scales and modes. That could be the starting point; this can be expanded to other tuning systems and so on
Damn, I was hoping the ai would generate a dubstep track based on what it had heard.
One thing I noticed: Some samples in my sample libraries are kinda wrongly named… there are claps that sound more like snares and vice versa, and some closed hi hats have longer tails than open ones. So, a program training on these samples might get them wrong because of this weird naming and not because the program is bad.
Yes I have that exact same thing! Especially with some of the clap/snare vengeance samples. While I could have gone through each sample individually to make a better split, I also didn't use that many samples (about 350 samples for each class). If I had way more samples to train with, it would probably do a lot better too.
I totally thought this was going to be about getting an AI to generate new dubstep tracks based on its learnings and I'm slightly disappointed that it isn't
Not yet, but this is the first step towards that goal
The quefrency is in ms because a Fourier transformation transforms time (s) to frequency (1/s = Hz). So another Fourier transformation (to the cepstrum) would transform the new "time" domain (which is in 1/s) into the quefrency domain (1/[1/s] = s).
Ah yes, I think I understand. Thanks!
I think this might be wrong. Correct me if you find a better explanation. Remember that the inverse Fourier transform (F^-1) is not equal to the Fourier transform (F) itself. Therefore, the second applied Fourier transform does not transform the frequency signal back to the original time domain.
Quoting Wikipedia: "The independent variable of a cepstral graph is called the quefrency. The quefrency is a measure of time, though not in the sense of a signal in the time domain."
@@tune_m that's what I said. The inverse Fourier transformation is of the same type but not exactly the same as the Fourier transformation. It is the time domain as in "same unit of time", not as in "same meaning as time".
Me : Hey Dylan you create music, code or AI ?
Dylan : *YES*
I am not a gifted coder:…..
Dylan: Makes a whole site generating music patterns
also dylan: uncomments a comment which breaks the site for 20 minutes
You and Sebastian are two of my favorite tubers
funny that you found each other
Dylan you are a legend. This was highly entertaining.
I've been hesitant about getting into AI because it seems too hard, but damn it I'm going in, thanks Dylan
this is so cool. excited to learn more about this
I would like to see this taken more in depth to where AI will learn to produce a track or even just a beat or drop
Bruh i hope you get your model to generate actual dubstep sometime x)
This would be really useful in the rhythm game I've been developing
This video is what the internet should look like. Funny, personal, but still informative, full of references, well-edited video- and audio-wise, well-explained, and not full of sponsors, VPNs and Raids shadow legends. Thank you Dylan.
This is the most awesomely nerdy thing I've seen all day. I thank you
Alright now we need an AI that can create random professional dubstep.
As a computer science enthusiast and a producer this video is amazing.
You are really smart Dylan.
Hey Dylan, try using Jupyter Notebook for working with data-sciency stuff, it is way more interactive and pleasant to use than running plain Python scripts. You can install it yourself or use Google Colab, a Jupyter cloud runtime that Google hosts for free for people to play around with machine learning
Also, why on earth would you import load, save and asarray separately? I get it when there is a really long nested object in tf that you want to use, but those imports save you a split second of typing "np.", so why??? D:
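For comparison, the more conventional spelling of those three calls; all_mfccs and "features.npy" are just stand-in names, not anything from the video's script.

import numpy as np

features = np.asarray(all_mfccs)        # all_mfccs stands in for the list of features built earlier
np.save("features.npy", features)
restored = np.load("features.npy")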
I've used Jupyter Notebook a lot! It's good just annoying sometimes.
I'll end up breaking the code somehow and then have to re-run every cell over again.
Also lol good point. I was just copying that from a tutorial so just mashed them together without really thinking about it.
@@DylanTallchief there is a button to restart the kernel and run all cells; you might use that when something goes wrong. That interactivity comes in very handy when trying to iterate quickly and try stuff without waiting for all the preprocessing to finish every time.
Also want to suggest lurking on kaggle. Find competitions about topics you enjoy and start looking at public notebooks; there is a lot of info and there are tricks to be learned there. Or you can even try participating in those competitions yourself!
For a sample classifier you could check out computer vision techniques, such as the OpenL3 model. It directly uses the log-mel spectrogram images and is pretty powerful! This particular network looks at 1-second spectrograms though, so for songs probably use MFCC and other features extracted from the song (the things Shazam uses)
Also, training for more epochs does not necessarily help :P it would be better to actually collect more data :)
As to why the validation performance doesn't necessarily go higher if training increases: this is something called overfitting. If your network has too many parameters, it learns to fit the training data so well, that having a slightly different input (your validation set) messes it up.
This is the problem of generalization. You want to prevent overfitting by using techniques such as dropout and weight normalization. Also make sure that your datasets have a similar distribution of classes. If the network sees 90% dubstep and 10% hardstyle during training, and you validate it on 10% dubstep and 90% hardstyle, it will for sure not work as well as if both were 50/50. The weights will have been tuned more specifically to the dubstep features
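A minimal sketch of both suggestions, assuming a Keras model roughly like the one in the video; X and y stand in for the MFCC features and genre labels, and the layer sizes and dropout rate are arbitrary placeholders.

import tensorflow as tf
from sklearn.model_selection import train_test_split

# stratify=y keeps the genre ratio identical in the training and validation splits.
X_train, X_val, y_train, y_val = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=0)

model = tf.keras.Sequential([
    tf.keras.layers.Flatten(input_shape=X_train.shape[1:]),
    tf.keras.layers.Dense(128, activation="relu"),
    tf.keras.layers.Dropout(0.3),                      # randomly silence 30% of the units each step
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dropout(0.3),
    tf.keras.layers.Dense(2, activation="softmax"),    # e.g. dubstep vs hardstyle
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])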
It is not uncommon for validation accuracy to go higher than training accuracy, because the training process is random.
The model might have randomly found a configuration that happens to perform better on the validation set.
A sub-100% training accuracy is not necessarily a bad thing, maybe your data is not perfectly clean (like the skrillex example you mention at 12:43).
Also, getting the training accuracy higher is not necessarily a good thing because you might be forcing the model to overfit (which you also mention at 13:28), and you might find the validation accuracy gets even worse.
Awesome. I‘m interested in more information about this.
Now make it generate new dubstep pieces
This was especially interesting as I wrote my Bachelor thesis in computer science on this exact subject: genre classification of music. But we used other models than neural networks. Also, we tried a bunch more features besides MFCC.
I understood less of that AI than I understand of PRO rappers. 10/10
comment for the algorithm, this video is incredible
What a great video to watch with dyscalculia
To increase accuracy you could increase the length of each sample, say to 4 or 8 seconds; you'd probably need more training material though.
Time for your AI to make Dubstep!
Please make an AI that makes dubstep, would be cool
Spectrum analysis is the best way to go, how ears work (evolution usually homes in on the simplest method). Can't wait to see you do an automatic music decomposer / re-composer (with instrument type determination?). If you can do it in excel, I'd be gobsmacked. As for AI replacing humans, not any time soon, AI has no soul (yet).
You're totally gonna get an A+ on this homework assignment.
Nice, now Excision has a way to make his concerts even more crazy
Probably the first time I've seen the words "Skrillex Is a Jazz"
this is the content that i want to see
I have been procrastinating about this music player that uses machine learning and haven't even tried yet, and here you are making music and doing this stuff...
Dude I mean... Great job. I'm impressed. Can't wait to hear a dubstep song entirely produced by an AI
As a CS student this is awesome
You should make the AI listen to hip hop songs with a sample from another genre.
Hmm...The Snare here is made of SNARE!
This is a really interesting video and topic
Man, you did really well for a beginner! There were a few things you could have done to improve the model. For example, make the convolutions span the whole frequency domain (a kernel of size [number of frequencies] by 3). Someone at Spotify wrote an article about it.
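A hedged sketch of that kernel shape, assuming a log-mel or MFCC input with n_freqs frequency rows; the numbers are placeholders, not whatever the Spotify article actually used.

import tensorflow as tf

n_freqs, n_frames = 128, 130                  # e.g. 128 mel bands, roughly 3 s of frames

model = tf.keras.Sequential([
    # Each kernel covers every frequency bin at once and 3 time frames,
    # so the convolution only slides along the time axis.
    tf.keras.layers.Conv2D(32, kernel_size=(n_freqs, 3), activation="relu",
                           input_shape=(n_freqs, n_frames, 1)),
    tf.keras.layers.GlobalMaxPooling2D(),
    tf.keras.layers.Dense(10, activation="softmax"),
])
model.summary()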
Cepstrum: Yo dawg, I heard you liked Fourier transforms. So I did a Fourier transform on your Fourier transform.
"Im not a great coder" , creates a DAW in excel , bro your one of the best
Some recommendations:
Jukebox from OpenAI, 2020 - AI generates music in a given style & even with lyrics.
DDSP from Magenta (Google), 2020 - They trained an additive synth to sound like a violin, but you can also do other kinds of stuff such as speech synthesis, dereverberation & reverberation transfer, timbre transfer ... etc.
FlowSynth from IRCAM, 2019 - A neural network was trained to learn the sound of the u-he Diva synth and make macro parameters that are 'perceptually continuous.' You can even download and try it with their M4L devices if you have Diva installed.
Some things you can try are: augmenting the data (e.g. by adding random noise to the inputs or even internal layers), modifying the network while training (adding / removing neurons to see if your network is currently overfitted/underfitted), get more data, try completely different model designs (like LSTM, CNN, etc).
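A small sketch of the first suggestion (noise augmentation), assuming X_train / y_train hold the MFCC features and labels; augment_with_noise and the 1% noise scale are made up for illustration.

import numpy as np

def augment_with_noise(X, copies=2, noise_scale=0.01, seed=0):
    """Return X plus `copies` noisy versions of every example."""
    rng = np.random.default_rng(seed)
    noisy = [X + rng.normal(0.0, noise_scale * np.std(X), size=X.shape)
             for _ in range(copies)]
    return np.concatenate([X] + noisy, axis=0)

# The labels just get repeated to match the enlarged feature array:
# X_train_aug = augment_with_noise(X_train)               # 3x as many examples
# y_train_aug = np.concatenate([y_train] * 3, axis=0)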
Could you make an AI make some sick 2010 - 2012 skrillex dubstep?
only 9am and Dylan’s already hurting my brain
I did something similar a while ago (extract feature from a series of 1d data - a series of numbers read from a couple of sensors with respect to time, but they do form rhythms like audio - audio files are basically time series).
Afaik MFCC is widely used in speech recognition; it is possible that during the transformation it drops some vital time and frequency features of the specific genre you are working with (like some drum patterns - something that is not reflected well in the MFCC features but is helpful for determining the genre).
Since you have 5 hours of samples, it seems feasible to use deep learning - use a deep 1D convolution network to make the model learn the features on its own (i.e. making your own MFCC or whatever you call it: a lot of new features that your model learns to extract from the raw data) - or an LSTM (works well with short samples). These are possible ways to push the accuracy above the limit.
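A bare-bones sketch of that "let the model learn its own features" idea, assuming 3-second mono clips at 22050 Hz fed in raw; the layer sizes are illustrative only, not a tested architecture.

import tensorflow as tf

clip_samples = 3 * 22050                      # 3 s of mono audio at 22.05 kHz

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(clip_samples, 1)),
    # A wide first kernel with a big stride acts like a learned filterbank (a stand-in for MFCC).
    tf.keras.layers.Conv1D(32, kernel_size=1024, strides=256, activation="relu"),
    tf.keras.layers.Conv1D(64, kernel_size=3, activation="relu"),
    tf.keras.layers.MaxPooling1D(4),
    tf.keras.layers.Conv1D(64, kernel_size=3, activation="relu"),
    tf.keras.layers.GlobalAveragePooling1D(),
    tf.keras.layers.Dense(10, activation="softmax"),
])
model.summary()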
A paper recently came out about bistable neural networks, which are much better than LSTMs and GRUs at capturing long-term dependencies, which seems a perfect fit for music. They also converge a lot faster.