Childhood is fading the DS-1; adulthood is admitting it's more versatile than a Tube Screamer.
Wow, this is amazing!!! So cool to see the advances in guitar modeling technology!
Man, you are a great engineer! Thank you for what you do! I would prefer 48kHz sampling though.
Oh, yes, you should have removed the DC offset before rendering. I think Reaper has this function/script as JS: DC Filter.
Seems like GuitarML and NAM, now supported by Two Notes' GENOME, are changing the 'guitar/bass recording' world for good!
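For context on the DC offset fix mentioned above: a capture with DC offset is simply shifted off the zero line, so the simplest offline repair is to subtract the mean before rendering. A minimal Python sketch, assuming a mono wav and the numpy/soundfile packages (file names are hypothetical); Reaper's JS: DC Filter is a real-time high-pass version of the same idea:

```python
import numpy as np
import soundfile as sf

# Load the rendered capture (mono assumed; file name hypothetical)
x, sr = sf.read("pedal_output.wav")

# Simplest offline fix: subtract the mean so the waveform is centered on zero
x_centered = x - np.mean(x)
sf.write("pedal_output_dc_removed.wav", x_centered, sr)

# A real-time "DC blocker" does roughly the same with a one-pole high-pass:
#   y[n] = x[n] - x[n-1] + R * y[n-1], with R close to 1 (e.g. 0.995)
```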
Good to know about that Reaper function! I might go back and train that model again. Yeah, in hindsight I probably would have chosen 48k, it seems to be a more standard recording samplerate, and yes, it's pretty amazing that these open source projects are having a big impact on the music industry!
@GuitarML One quick question: does going to 48kHz require a model re-design, or if you select 48kHz in training now, is it all set?
@threepe0 You could train a 48k model that way (if you remove the checks in the training code and also swap the input wav file for a 48k version). To run it in real time, you'd have to modify the Proteus code to change the samplerate to 48k for processing. I'm not sure how it's handled in Genome, but it likely wouldn't know to use 48k if you trained a model that way.
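To illustrate the wav-swapping half of that, here is a minimal sketch of resampling the 44.1 kHz capture signal to 48 kHz with scipy and soundfile (paths are hypothetical; you would still need to relax the samplerate checks in the training code as described above):

```python
import soundfile as sf
from scipy.signal import resample_poly

# Read the stock capture signal (expected to be 44.1 kHz)
x, sr = sf.read("Proteus_Capture.wav")
assert sr == 44100, f"unexpected samplerate: {sr}"

# 48000 / 44100 reduces to 160 / 147 for polyphase resampling
x_48k = resample_poly(x, up=160, down=147)
sf.write("Proteus_Capture_48k.wav", x_48k, 48000)
```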
@GuitarML How would I go about swapping the input wav file? Where is it stored? Thanks for your work, by the way. I know you are kind of humble about your technical achievements, but for me it is more the way you present your work in a comprehensible and calm manner that makes your work really helpful.
Thank you so much for your kind words! The input wav is stored in the training code repo: Automated-GuitarAmpModelling/Data/Proteus_Capture.wav
After pulling this code into Colab in the first step, you could navigate to this Data directory and swap it for a custom file with the same name. Note that this is in the "proteus-capture" branch, which the Colab script checks out by default. Here is the full link on GitHub: github.com/GuitarML/Automated-GuitarAmpModelling/tree/proteus-capture/Data
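As a sketch of that swap step in a Colab cell (the repo path follows the clone location; the /content working directory and your custom file name are assumptions):

```python
import shutil

# Overwrite the stock capture signal with your own recording under the same name.
# Your custom wav must match the original's samplerate (44.1 kHz by default).
shutil.copy(
    "/content/my_custom_capture.wav",
    "/content/Automated-GuitarAmpModelling/Data/Proteus_Capture.wav",
)
```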
Okay - so much to say but I'll try to keep it short. Try...
Firstly, I think this is great in so many ways, but maybe the greatest is that you're open sourcing this at a time when money lust runs rampant in the world. You're making A LOT of people happy here, and that's what karma is all about. Hats off! (That's how we used to say "Respect!")
I watched your Fat Boost capture video and a couple of things came to mind. I see in this video you've got a reamp/DI box, so that's one thing you've already fixed on your own. I haven't studied all this yet, but my guess is impedance matching matters.
The other thing: when you set the levels in the interface and DAW to capture the pedal output, it would make sense to me to do it with the pedal bypassed first, also checking at that point for buzzes, hums, and hiss, then enable the pedal and adjust the level from there.
The DC offset you're seeing may just be a bit of latency, which I suppose would be visible on the whole waveform.
In the cases I've seen, "normalization" seems to be implemented differently in different apps. Usually it's about raising levels, sometimes limiting the top (see the sketch after this comment).
And finally, nice guitar playing. It's good to see a young guy focusing on playing clear notes with feeling rather than shredding. YouTube is full of 10-year-olds who can shred, but it's pretty rare to hear interesting riffs.
Cheers
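On the normalization point above: peak normalization only rescales the whole file so its loudest sample hits a target level, while a limiter actually reshapes the loudest peaks. A minimal peak-normalize sketch in Python (numpy assumed):

```python
import numpy as np

def peak_normalize(x: np.ndarray, target_db: float = -1.0) -> np.ndarray:
    """Rescale so the loudest sample hits target_db dBFS; no limiting or clipping."""
    peak = np.max(np.abs(x))
    target = 10.0 ** (target_db / 20.0)
    return x * (target / peak) if peak > 0 else x
```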
Is there any easy-to-use (non-programmer) software to make my own amp captures?
You don't have to do any programming to follow what I do in this video, but if you mean more polished capture software… I'd point you to paid solutions like Neural DSP's Quad Cortex or IK's ToneX. There might eventually be an upgraded solution for my stuff, but really all I'm doing here is clicking buttons.
Amazing. I was thinking, maybe the best way to match in/out volumes is simply turning the pedal off and making sure both channels read alike. Once they're matched, you can proceed with the pedal turned on!
Hi, thank you. The differences come from the input level for me, just check it out!
Hey there, this is amazing! I wanted to ask a question regarding the model. Do you feel the model is accurate enough at the knob settings it wasn't captured on? If not, is there a way to add multiple instances of the pedal at different settings and then combine them in a way that makes the most accurate representation of the analog counterpart? Sorry if this sounds like a silly question, but I'm planning to train some myself, and who better to ask than you.
Not a silly question; it's actually one of the questions I get the most (and one that has been asked a ton on the NAM side). This is a "snapshot" model, so it really only accurately captures this one setting of the pedal. You can increase/decrease the input gain to have an effect to some degree, but not accurately model the dist knob. Check out the "knob" capture videos on my channel; those do what you're thinking of, modeling the full range of one or more knobs in a single model by using multiple recordings of the pedal/amp at different settings. It's more work, but effective if done correctly.
I think the model actually sounded better!
Moving to the next level, is there any way to create a VST with knobs, etc.? IOW, a dynamic version of the model.
Curious about this myself
For sure! I've modeled up to 4 knobs of a device: gain and three EQs. It's more work because you have to record the device at multiple settings, but there are smart shortcuts to make the process manageable. Check out the "knob" capture videos on my channel, which show how to capture the range of a single knob (typically gain/distortion) and run it in my Proteus plugin. I know the NAM side has also demonstrated this. Currently I don't think the Proteus knob captures work in Genome.
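For intuition on how one model can cover a knob's whole range: the usual trick with conditioned captures is to feed the normalized knob position into the network alongside each audio sample, so recordings at several settings teach it to interpolate between them. A toy PyTorch sketch of the idea, not the actual Proteus architecture:

```python
import torch
import torch.nn as nn

class ConditionedCapture(nn.Module):
    """Toy knob-conditioned model: (audio sample, knob position) -> output sample."""
    def __init__(self, hidden_size: int = 40):
        super().__init__()
        # input_size=2: one channel for audio, one for the knob value in [0, 1]
        self.lstm = nn.LSTM(input_size=2, hidden_size=hidden_size, batch_first=True)
        self.head = nn.Linear(hidden_size, 1)

    def forward(self, audio: torch.Tensor, knob: float) -> torch.Tensor:
        # audio: (batch, samples, 1); broadcast the knob as a constant channel
        k = torch.full_like(audio, knob)
        out, _ = self.lstm(torch.cat([audio, k], dim=-1))
        return self.head(out)

# One forward pass over a block of samples with the dist knob at 0.7
model = ConditionedCapture()
y = model(torch.randn(1, 2048, 1), knob=0.7)
```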
If they added convolution reverb and made it all into a hardware unit that's pedalboard friendly, that would be insane. Also, if you own a copy of Wall of Sound, you probably got an email with a free license for Genome. I sort of wish Genome had a separate spot for pedal captures that could be used with an amp capture. Sounds really close though, good stuff.
Using the headphone jack output into the reamp box: isn't that the wrong impedance, etc.?
Possibly; I might try the line out on the back of the Focusrite next time.
Hey Keith, I hope you read this message. I have a few questions to ask you: 1) What is the best way to check the accuracy of the trained audio vs. the pedal (metrics, etc.)?
2) Is there a way to toggle between the capture json files without swapping the files in file explorer, in order to make it sound like a fully fledged white-box coded pedal? Thanks in advance.
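On question 1: the underlying training code reports an error-to-signal ratio (ESR), and you can compute the same metric offline between the pedal recording and the model's render. A minimal sketch (mono wavs and hypothetical file names assumed):

```python
import numpy as np
import soundfile as sf

target, sr = sf.read("pedal_output.wav")  # the real pedal, re-recorded
pred, _ = sf.read("model_output.wav")     # the trained model's render
n = min(len(target), len(pred))           # trim to a common length
target, pred = target[:n], pred[:n]

# ESR: error energy relative to target energy (lower is better)
esr = np.sum((target - pred) ** 2) / np.sum(target ** 2)
print(f"ESR: {esr:.6f}")
```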
Thanks for this, Keith.
Is there any chance you could upload a tutorial video describing the complete 'guitar to computer to what the viewer hears' chain? One where the viewer is told how to optimise each stage in the chain and, crucially, can hear each section on and bypassed. Importantly, where and how are you tapping in to get the final sound the viewer hears? The sounds we hear from your Genome uploads are truly fantastic, but unfortunately for me, they sound so much better than the far less inspiring sounds I get.
I'm using a new AM5 PC platform (7950X CPU, 32GB RAM) with an RME Fireface 802. I plug my Beyerdynamic headphones (DT880 Pro) into my Behringer MS20 headphone socket. There is a difference if I plug directly into the Fireface (it sounds brighter, better). Importantly, using this listening route (headphones into the MS20), your YouTube sounds are still far, far better than mine. I have this same issue with all of my plugins (Bias/S-Gear/Tone King etc.); they all sound worse than in the YouTube videos. Hopefully I'm doing something daft that can be fixed.
It sounds great. While there are a lot of software options, if you're struggling to find that one sound that best mimics your own actual setup, this can help you find your sound.
Now can we capture the sound of an accompanying horn section LOL?
Can I ask how you knew that the model you trained would run in Genome? Like do the models have the same characteristics or something? Did you work with Two Notes on this, or did you do some discovery to figure out what sort of model they were using?
Yes, they reached out to me last year and wanted to include GuitarML in Genome’s capabilities. That would be some crazy reverse engineering otherwise! There are so many different kinds and variations of neural nets that the slightest difference makes them incompatible.
@GuitarML That would be some crazy reverse-engineering indeed! But honestly, watching the projects you've been involved in over the last few years, I wouldn't put it past you haha!
Man, it's really exciting to see the Proteus stuff being pulled into an official commercial product like this.
It would be cool if the Proteus/NAM stuff became kinda "the" standard format for amp captures, akin to how we look to IRs as "the" format for cab emulation.
Whatever happens, I'm using and loving it, so I suppose that's most of what matters. But it is good to see someone's hard and interesting work progress like yours seems to have.
There is almost no difference between the real and the modeled sound. I am a programmer who loves guitar. I want to make something like this like you did, but learning something new seems too difficult. I am so jealous of you.
If you understand the basic concepts of programming, then that's really all you need to start learning audio DSP or machine learning. Trust me, if I had seen something like this 3 years ago it would have seemed totally impossible, but it's really just the result of incremental learning, tinkering, and asking for help from people who know way more than I do!
Something is off, as the capture's waveform seems to be longer (by one tiny bar) than the training file.
Not a problem, the internal code trims the wav file to the proper length. Good catch though!
I had to go get a degree in watching YouTube videos to follow all the steps required to capture this pedal. I hope someone makes this process accessible to idiots… failing that, I'm going to buy a Boss DS-1.
Great tutorial - thanks
Have I been reamping wrong? I go out of my interface's headphone output into the guitar pedal, then out of the pedal into my reamp box, then out of the reamp box into the input of my audio interface.
Unfortunately, yes. Your pedal/amp reacts differently to line level vs. instrument level; you want to get the signal down to instrument level before the amp/pedal, so the reamp box should go before the device.
@GuitarML I still get good-sounding captures doing it the way I have been, but I'm going to try the way you did it in this video to see the difference.
Hi. Very cool! Thanks! 😇
Sounds good!
There are so many great amp and pedal plug-ins in DAWs... why bother?
I like this question because it made me stop and think. For me personally, I find the technology fascinating, and it's a lot of fun to share these captures with other people. There are so many options in the digital world for guitarists, but this lets you get a very specific sound from an amp/pedal, with a fairly high level of confidence that it matches the real thing.
And yes, I know that profiles can't do time-based effects... yet.