I got goosebumps just seeing this concept.
I trust my instinct...it can be HUGE.
I cut and spliced my parents' tapes using an 8mm film splicer as a kid... then in the 80s I got this Korg DSS-1 sampler and went crazy :)
It would be fun to have the option to make personal models public, similar to how people share Kontakt banks, for example.
This is actually the focus of our next update so it's lovely to hear it requested. We'll have more news soon!
It would also be nice to see videos demoing each of the models you have already trained. Thanks!
Definitely! We might do a "no talking demo" in the future. Morpho also has a 7 day free trial of the entire public model library (currently 35 models) if you want to see what's possible.
Holy balls, this is spine tingling, absolutely loving this. Great quick n dirty demo that really activated the almonds. While the processing time is a bit on the long side (nothing for spontaneous sound design but you can't always have everything all at once) the potential of this thing came through in flying colors (or screaming demons).
Haha, really glad to hear it resonated with you. Thank you for the support!
WOW this is really good and very unique, I like it very much.
Thank you for the kind words!
this is amazing. I just purchased! Let's go.
Impressive, great work!
This is truly cool. I would want to see and hear the results of a varied pool of training inputs. What was demonstrated with the single one (speech) here makes my imagination just completely explode. What happens when you train it on one percussionist/drummer in a room full of instruments vs training it on a single instrument? One person playing linear patterns on a piano versus two people playing 4 total unique parts? What happens when you train it on the animated psychedelic 1972 classic Fritz The Cat? Or a little chamber orchestra? Or a homestead yard scene with chickens clucking and pigs snorting and goats making whatever that sound is called? There should be some kind of companion app so people can easily do the sampling in the field rather than just at home. Making it an AUv3 so people can use it in Loopy Pro would also be rad.
Really fascinating! I am a huge Kate Bush fan and can't help but wonder, and hope, that she finds this and experiments with it some day. I just spent money on some music software, but this is definitely going on my list to try in the future.
That would be brilliant.
Yeah... Give us your audio database and sit back and relax... And don't worry, your models are private AND THEY WILL BE SAFE WITH US.
The YouTube comment section is probably not suitable for a deep discussion on this but I understand any wariness you might have with data. For now I would encourage you to look at our history - we have been very vocal about our stance on ethical model training and this permeates everything we do. Check out our blog, have a look at aiformusic.info, note how we publicly list and credit all training data sources for every model in the plugin's browser. I know talk is cheap and trust is hard to establish, so we ask you to judge us on our actions. Feel free to shoot us an email if you want to discuss this properly.
3 to 5 days is wild
Great concept, but “the future of sound design” probably needs a more impressive example…
I’m not saying the example you gave is bad, but it doesn’t make me feel excited and inspired about the future of sound design. Just me? Hh
(The plugin concept itself is exciting and inspiring of imagination. I want demos that match that. Otherwise, it’s just interesting.)
@@jrettetsohyt1 Hey no worries at all. For this video we wanted to focus on how sound design can be truly personal with Morpho, and show how anyone can make an interesting model with just their voice. That said, there is plenty of scope for more "out there" models applied to more boundary-pushing music. We work with some brilliant artists who continue to surprise us with the ways they incorporate Morpho into their art. We're all ears if you have suggestions for models/demos/etc.!
offline use or local training options?
Guessing there might be copyright issues with that
Morpho does indeed support offline use. Local training is not a priority currently but we're listening to feedback on the matter. We have another free community plugin, Neutone FX, for those who want to experiment with training neural audio effects locally.
I like the concept, but there's an inherent problem with the way neural networks work, which is the sheer lack of definition in how you control sound. It's cool, but it feels like this would be more of a cool unpredictable effect to use in tandem with classical sampling than "the future of sampling". I think the control you have over the minutae of what is happening is part of what makes sound design techniques appealing.
Great comment. We also believe good parametrization is essential for the adoption of these new tools, and we know there are improvements to be made. Our internal research has been tackling this with a two-pronged attack: 1) Disentangling features such as pitch and instrument type within the architecture and 2) Exploring different UX and visualization techniques to condense the latent space into something more navigable. The feedback we get at this stage will greatly influence how we develop Morpho and other future plugins.
As you mentioned, combinations of Morpho with traditional sampling are totally valid and very powerful. We don't want to replace but expand the options available.
@neutone_ai I look forward to seeing what you come up with, then! Also glad to see AI solutions that don't attempt to replace the artist but rather enhance their ability
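The "condense the latent space into something more navigable" idea from this thread can be sketched in a few lines. This is a toy illustration with made-up 64-dimensional latent vectors and a plain PCA projection - not Morpho's actual architecture or UI - just to show what mapping a latent space to a 2D control surface could look like:

```python
import numpy as np

# Stand-in for encoder outputs: 500 latent vectors of dimension 64.
# (Hypothetical data -- a real model's latents would come from audio.)
rng = np.random.default_rng(0)
latents = rng.normal(size=(500, 64))

# PCA via SVD: the top two right-singular vectors of the centered data
# span the 2D plane that captures the most variance.
centered = latents - latents.mean(axis=0)
_, _, vt = np.linalg.svd(centered, full_matrices=False)
xy = centered @ vt[:2].T            # (500, 2): a navigable 2D map

# Going the other way: a 2D cursor position lifts back to a full latent
# vector that a decoder could synthesize sound from.
cursor = np.array([1.5, -0.3])
latent_guess = latents.mean(axis=0) + cursor @ vt[:2]
print(xy.shape, latent_guess.shape)  # (500, 2) (64,)
```

A linear projection like this is only the simplest option; the point is that any such mapping trades the full latent space's expressiveness for a surface a player can actually explore.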
All I want is local training as an option.
What would be a good minimum powered laptop that could handle this plug-in?
Then we’d know that anything above that would be safe. Thanks.
For Mac users we recommend anything Apple Silicon - the base M1 chip is ample so long as the buffer size and sample rate settings are not too aggressive (48kHz sample rate, 2048 buffer size is a safe starting point). On Windows it is trickier to define, but an i7 or equivalent from 2020 onward should be comfortable running Morpho. You can download the free version of the plugin to check how it runs on your system.
@@neutone_ai How would it perform on an M1 Max with 64 GB RAM, 128 buffer size, 48 kHz?
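To put numbers on the buffer-size trade-off in this thread (a generic real-time audio calculation, not a measurement of Morpho itself): at a given sample rate, the buffer duration is the hard deadline the plugin's processing must meet on every callback.

```python
# A real-time plugin must finish processing each audio buffer before the
# next one arrives, so the buffer duration IS the processing budget.
def buffer_budget_ms(sample_rate_hz: int, buffer_size_samples: int) -> float:
    """Time available to process one buffer, in milliseconds."""
    return 1000.0 * buffer_size_samples / sample_rate_hz

safe = buffer_budget_ms(48_000, 2048)   # the "safe starting point" above
tight = buffer_budget_ms(48_000, 128)   # the low-latency setting asked about
print(f"2048 samples: {safe:.1f} ms of headroom; 128 samples: {tight:.1f} ms")
```

At 48 kHz, 2048 samples leaves about 42.7 ms per buffer, while 128 samples leaves under 3 ms - which is why a small buffer demands a much faster machine for the same model.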
I would love to see AAX ver. developed as well!
Thanks! AAX is definitely on our radar. We'll let you know when we have a more solid timeline for release.
1.5 hours of recording followed by a five-day wait to get a single guttural vocal sample? I like the idea, but the implementation leaves me confused.
This stuff will reconnect us to art as performance and a human creation I think - when everyone can churn out AI tracks like this it'll take art off a pedestal. So I guess this is positive! Still, can't understand wanting to use it!
We're not interested in AI tools that churn out things for you, and I'm sorry if this video gave you that impression. We think this is far more personal and creative than simply using sample libraries that someone else recorded for you. If you recorded all the training material yourself, and you implemented those sounds into your own music without any decisions such as melody or rhythm being made for you - what creativity has been lost?
With Vocoflex, Concatenator, and now Morpho, it has become obvious that neural sampling is indeed a new category of AI-powered sampling plugins, and an exciting one for sure :)
Anything can be made percussive, but noise is not music. I'll look again when notes and chords can be trained as output.
To be clear, you can absolutely train pitched models. It requires some additional considerations that we wanted to spare you for this introductory video, but it can be done. Personally I've had great fun playing guitar through a model trained on an opera singer!
I think to really get this off the ground, and in people’s tracks, the training needs to be somewhat unlimited. At this stage, no one knows what they’re going to get, and they should feel good about experimenting without the fear of results that are not usable. This is compounded by the three-day wait time. Disappointment can easily ruin a great concept. Unleash the early adopters…this is the way.
You raise an important point and it's one we've mulled over a lot internally. We don't want users to feel penalized for experimenting, and I know firsthand what it's like when a model doesn't come out exactly as planned. On the other hand, training costs us a lot in compute, and giving away tokens would be giving away money. Our current solution is to allow users one retry per token. Generally that's all you need to tweak or fix something in your dataset. We think it's a fair compromise and a gesture of goodwill, given that this retry attempt comes out of our own pocket. Your feedback is very welcome though; we're still listening and learning!
I love it. But it's sadly so heavy on the CPU. I hope this will improve :) keep it up!
We completely hear you. On the one hand we're stoked that it's even possible to have a neural network of this complexity running in real time on CPU, but when we're getting creative we want to throw on as many instances as possible. We're actively working on optimization and we hope to chip away at this in future updates. Thank you for your support!
@@neutone_ai Would be great to utilize the GPU instead. But I'm sure you'll consider this anyway if possible.
iOS pls
It would be nice not to have to do a PhD in Neutone every time you want to use it.
Could you let us know what you found tricky about using Morpho? We really want to remove as many barriers as possible for artists to experiment with neural audio.
I think 29 dollars for one model is a bit too much; in my opinion, something around 2.99 would be better. For instance, I would buy at least one at that price, but at 29 I would hold myself back and not buy any, because I'm too afraid I could like it a lot and spend too much money, one model after the other...
From the intro, I was hoping this was going to be something truly innovative like Synplant 2 or Visco. But disappointingly, this seems like just snake oil and more bother than it's worth.
Totally fine if this isn't for you, thanks for having a listen anyway!
come on
I feel like everything we heard here would be easier to just make manually.
I mean, the effects you used on your voice did a better job. Why go through the trouble of training a plugin to get a less accurate and controllable result?
Oversold and underdelivered
Meh, you're all excited about a noise gadget. We've been able to do all this for ages by tinkering with plugins.
No option to train the model locally? That's a HARD PASS from me.
Check out our free community plugin Neutone FX if you want to experiment with training your own neural audio effects locally. We have an open source SDK on our GitHub and tutorials to get started.
@@neutone_ai A link to the GitHub in your video's description would be appreciated 🤙🏾
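For anyone curious what "training a neural audio effect locally" boils down to, here is a minimal sketch in plain numpy - not the actual Neutone FX SDK, and with a made-up one-parameter effect - showing the core loop of fitting a model to paired dry/wet audio by gradient descent:

```python
import numpy as np

# Fit a one-parameter waveshaper y = tanh(drive * x) to a dry/wet pair.
# (Toy example: the "effect" we mimic is itself tanh with drive = 3.0.)
rng = np.random.default_rng(1)
x = rng.uniform(-1.0, 1.0, size=4096)   # dry input audio
target = np.tanh(3.0 * x)               # wet audio from the effect to learn

drive = 0.5                             # trainable parameter, poor initial guess
lr = 0.5
for _ in range(500):
    y = np.tanh(drive * x)
    # Gradient of mean squared error w.r.t. drive:
    # d/d(drive) mean (y - target)^2 = mean 2 (y - target) (1 - y^2) x
    grad = np.mean(2.0 * (y - target) * (1.0 - y**2) * x)
    drive -= lr * grad

print(round(drive, 2))                  # converges near the true drive of 3.0
```

A real effect model replaces the single `drive` parameter with thousands of network weights and the hand-derived gradient with autodiff, but the train-on-your-own-recordings loop is the same shape.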
Hey, amazing!! The Discord link in the description doesn't work and I would love to join :D
Oh no! Here is the code to paste directly into Discord if you're still having trouble: r6WwYCvJTS
Downloaded it, tried it, deleted it. Complete nonsense.