Neutone
Joined Oct 6, 2022
Neutone makes AI technologies accessible for all to experiment with. You’ll find transformative AI audio instruments that will spark endless creative possibilities.
Neutone is a go-to platform for you to share real-time AI audio processing models with potential users in the audio production community.
This TRAINABLE plugin is the future of sound design
Alfie takes on the Neutone Morpho plugin with a custom model trained on his voice. Learn how to create brand new textures with your own sounds in this step-by-step walkthrough.
Try Neutone Morpho: neutone.ai/
Join our Discord server: discord.com/invite/r6WwYCvJTS
0:00 - Introduction
1:37 - Training a custom model
3:59 - First impressions
7:32 - How do others react?
8:32 - Voice sound design
9:55 - Making some music
16:00 - Finished demo track!
Views: 16,058
Videos
Train your own tone morphing models with Neutone Morpho
Views: 858 · 2 months ago
The 1.1 update for Neutone Morpho is here, and with it comes our long awaited model training service. For the first time ever, the freedom to craft new tone morphing models is available to all in a convenient drag-and-drop interface. No code, experience or GPU required. You supply the sounds, and Morpho converts them into unique sonic textures that play like a brand new instrument. Get Neutone ...
Introduction to Tone Morphing with Neutone Morpho
Views: 6K · 7 months ago
Introducing Neutone Morpho, our latest plugin that brings the power of tone morphing to your DAW. Get started: neutone.ai/morpho 0:00 - Morpho Examples 1:06 - Introduction to Neutone Morpho 3:09 - Technical Explanation 5:10 - Hands-on Demo Special thanks to REATMO reatmo
Neutone Morpho, real-time Tone Morphing plugin, has arrived!
Views: 14K · 9 months ago
We’re pleased to present Neutone Morpho, a real-time tone morphing plugin, today. Our cutting-edge machine learning technology can transform any sound into something new and inspiring. neutone.ai/morpho Transform your voice to mimic the elegance of a violin, turn a coffee cup and pen into a rhythmic drum set, the possibilities are endless with our ever-expanding store of exclusive models. You c...
Neutone Morpho: Micro View + New Models!
Views: 1.8K · 10 months ago
Today, we're highlighting our Micro View, which gives you complete control over your sound source, fine-tuning the subtlest of details to your liking. We trained a model based on our very own @naotokui 's album, Mind The Gap, showing that Morpho is not just for imitating musical instruments, but can be used for ANY sound source! In the near future, we plan to create more "Artist" models in col...
Morphing Drums
Views: 6K · 10 months ago
See how Neutone Morpho can morph incoming audio - in this case a drum loop - in real time into various forms of output via our AI models! We will soon be making more and more models available to truly expand the horizons of what you can create :)
Distant Echoes
Views: 420 · 10 months ago
Composition created by Nao Tokui with Neutone Gen and Neutone Morpho. Visuals by Ryosuke Nakajima
Exploring Neutone Plugin: Taking Voice to Another Dimension
Views: 5K · 1 year ago
Neutone Tutorial - Custom Parameters Feature
Views: 10K · 1 year ago
So this is a Morphoder™
Super cool! Great to see creative examples of using these tools. Such a fun vibe. 🎉
Thanks! Can't wait to hear what you create with Neutone! 😉
Nothing new in sound shit really.
So the response is always random? How do you drive this thing? The idea is interesting, though not new.
Virtually unusable due to cpu usage on my Intel Mac.
insane!
Needing to upload my data to some server is a big turn off to me. I don't want anyone to have my data, and I never believe promises of privacy. Even if the current server owners are honest, the company could be bought by someone else in the future who will not honor the previous owner's promises. Recording 1.5 hours of audio to get the effects also doesn't sound very appealing to me. Cut the requirement down to maybe a minute of audio and process it completely locally, without needing internet access and you might get my attention.
Thanks for your feedback. I have a couple of thoughts here that might not change your position but I think are worth mentioning anyway:
1. The audio you upload is deleted once training has finished - we have no need for it once the model is ready. Of course this is important for privacy reasons, but also for practical ones - it would be an unnecessary operational cost to keep terabytes of unused audio on our servers. As a result, the scenario you described, where we might be bought by another company with bad intentions, is not possible, because we no longer have your audio data. Your model is a neural network with frozen weights and cannot be changed or reverse-engineered to retrieve the data used to train it.
2. The audio data you upload is not the kind of data that could be sold to advertisers or similar third parties. Advertisers want to know which YouTube videos you comment on because that helps them build a profile of you. Contextless audio files are not valuable to them in the same way.
3. Let's imagine a scenario where we were indeed a company with bad intentions and wanted to gather lots of audio for training other models. Would it not be easier for such a company to just scrape the web instead? Sadly, music and audio are not rare or valuable commodities online.
Ethical training is at the core of our company. Case in point - we specifically chose a Lovecraft book for this example model because his works are old enough to be in the public domain. We could have recorded some other, more recent book, but it felt more ethical to record something old and public. I hope this helps to highlight our position.
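The "frozen weights" point can be made concrete with a toy example: once training is done, a model is nothing but fixed arrays of numbers plus a forward pass, so the shipped payload contains no training audio at all. This NumPy sketch is purely illustrative - the network, its shapes, and its weights are made up here and have nothing to do with Neutone's actual architecture:

```python
import numpy as np

# A trained "model" reduces to fixed weight matrices plus a forward pass.
# Toy 2-layer network with arbitrary frozen weights:
rng = np.random.default_rng(0)
W1, b1 = rng.standard_normal((16, 8)), np.zeros(16)
W2, b2 = rng.standard_normal((1, 16)), np.zeros(1)

def forward(x: np.ndarray) -> np.ndarray:
    """Inference: a pure function of the input and the frozen weights."""
    h = np.tanh(W1 @ x + b1)
    return W2 @ h + b2

# Shipping the model means shipping only these numbers - nothing else:
payload = {"W1": W1, "b1": b1, "W2": W2, "b2": b2}
total_params = sum(v.size for v in payload.values())
print(total_params)  # 161 parameters; no training audio in the payload
```

Recovering the exact training set from a small set of frozen weights like this is not possible in general, which is the substance of the reply above.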
very cool would love to test it out
i like your company
so, let me get this straight, so you're selling a vst, then you're selling to us again a "trained" model, but we have to send it to you so you can get our "models" for free so then you can "sell" our models again?🤨
The trained model will be private and available only to you. We don't sell your custom model to anyone.
That piece sounds like it could be a bjork piece lol
This is cool it sounds like Bourne Identity experimental soundtrack
After long acquaintance I truly believe Steinmeier capable of anything; you really couldn't wish for a worse Federal President for a country! Top of the End!
Mixed feelings regarding this one. Great in terms of exploration & you can actually get some good sounds out of that, but if you have a specific idea in your mind or a specific feeling which you wanna put out - then old school style it is. Never underestimate the power of the creative mind/soul with a pair of skilled hands 😉 Music is all about the feeling at the end of the day, no matter the tools or tech ;)
True. It's meant to help you to explore new ideas.
looks like you guys have a great modern recording facility. Funny..well not funny but the difference between CLASSIC as compared to MODERN recording...looks great how and where audio is going.
Thats just insane! Id love to work with you guys :)
Any time!
Oh Yeah! Sounds like Yello. Can't remember which tune though...
This is insane.
9:00 This is going to be so useful for sound designers doing horror and scifi films. Anyone who needs to transform sounds in a way that sounds organic yet still fantastical. Most of my own work is fantastical world stuff, i can see this being useful. My concern is that in order to build models for it, there is reliance on your company to do the model computation. Which is fine now, but it can be hard to know how well the product will be supported in coming years. Any chance you might be able to make the training application available to people with a good GPU?
As an American spending half of my time in the UK the last few years this video cracks me up. For those unaware, Brits have a tendency to love "underwhelming presentations of things", their culture doesn't emphasize the "Hurry! Hurry! Step right up! See the amazing thing! You'll never believe your eyes!!!" showmanship that some of our cultures do (especially American), which might explain everyone's unimpressed reaction to this. To the Brits making the video: If Nikola Tesla was alive today and started a video in his mad scientist workshop claiming that electricity for mankind might be about to change forever due to his zappy new inventions, but then spent 16 minutes using the electricity to bake a potato in a less efficient way than simply baking a potato, you'd probably find the demonstration a bit under-representative of the claim at the start of it, yeah? Suddenly Tesla looks like a bit less than a potential genius and more like someone who stumbled on technology by accident or the effort of others and is now using it for the lulz of mundanity. Have a look at how the creators of the new "Concatenator" synth are promoting it. I don't know if that plugin is worth its salt, or even anything beyond an ineffective grift, but they've created a great set of "hook content" that shows their flavor of AI driven tech accomplishing something that people can't already easily make with, well, Audacity
DNA was first discovered in 1869 by Swiss scientist Friedrich Miescher. Yuri Gagarin, a Soviet cosmonaut, became the first human to travel into space on April 12, 1961. The picture you show of Pierre Schaeffer (along with François Bayle and Bernard Parmegiani - GRMC) was taken in 1972. And Peter Zinovieff invented the EMS Musys in 1969, which is thought to be the first sampler. The Fairlight CMI, which was the first commercially used sampler, debuted in 1979. All this to say, your comment that these things happened "before we discovered DNA or put a man in space", though figuratively creative and good for making the point that sampling goes way back, is factually incorrect... in the first case by more than a century.
Crick and Watson’s paper and famous sketch of the double helix date to 1953. Gagarin went to space in 1961. Schaeffer’s Étude aux chemins de fer was composed in 1948. Halim El-Dabh’s The Expression of Zaar was composed even earlier, in 1944. The date of the photo and the invention of sampling machines are surely irrelevant here? Yes, maybe you could argue that we should have changed the word “discovered” to some variant of “understood”, but we are really splitting hairs at that point in a video about music.
Could this reverse engineer any sound you feed it and then replicate it in a controllable way?
or any suggestions for plugins which can do this?
Synthplant
I’m concerned that it a) needs 3-5 days of processing of b) 1.5hrs of input? Smaller in and quicker would interest me + being on machine? JPMusic/Aotearoa
hipstershit
Phaseplant does this already, or is this different?
This reminds me of Synplant 2 - curious to check it out :)
Cool
definitely great for cinematic sound design and some niche music genres
This is truly cool. I would want to see and hear the results of a varied pool of training inputs. What was demonstrated with the single one (speech) here makes my imagination just completely explode. What happens when you train it on one percussionist/drummer in a room full of instruments vs training it on a single instrument? One person playing linear patterns on a piano versus two people playing 4 total unique parts? What happens when you train it on the animated psychedelic 1972 classic Fritz The Cat? Or a little chamber orchestra? Or a homestead yard scene with chickens clucking and pigs snorting and goats making whatever that sound is called? There should be some kind of app companion thing so people can easily do the sampling in the field rather than just at home. Making it an auv3 so people can use it in LoopyPro would also be rad.
All I want is local training as an option.
1.5 hr of recording followed by a five day wait to get a single guttural vocal sample? I like the idea but the implementation leaves me confused.
Holy balls, this is spine tingling, absolutely loving this. Great quick n dirty demo that really activated the almonds. While the processing time is a bit on the long side (nothing for spontaneous sound design but you can't always have everything all at once) the potential of this thing came through in flying colors (or screaming demons).
Haha, really glad to hear it resonated with you. Thank you for the support!
ios pls
Really fascinating! I am a huge Kate Bush fan and can't help but wonder, hope that she finds this and experiments with it some day. I just spent money on some music software, but this is definitely going on my list to try in the future.
That would be brilliant.
Oversold and underdelivered
I feel like everything we heard here would be easier to just make manually. I mean, the effects you used on your voice did a better job, so why go through the trouble of training a plugin to get a less accurate and less controllable result?
I cut and spliced my parents tapes using a 8mm film splicer as a kid .. then in the 80s I got this Korg DSS-1 sampler and went crazy :)
Awesome. I bet those DSS-1s go for a pretty penny these days!
@neutone_ai If I hadn't sold it I would have a solid gold bar in that one :)
Downloaded it, tried it, deleted it. Complete nonsense.
I think 29 dollars for 1 model is a bit too much; in my opinion, something around 2.99 would be better. For instance, I would buy at least 1, but at 29 I would restrain myself and not buy even 1, because I'm too afraid I could like it a lot and spend too much money, one model after the other...
what tuning is that lead guitar in? sounds great!
Yeah... Give us your audio database and sit back and relax... And don't worry,your models are private AND IT WILL BE SAFE WITH US.
The YouTube comment section is probably not suitable for a deep discussion on this, but I understand any wariness you might have with data. For now I would encourage you to look at our history - we have been very vocal about our stance on ethical model training and this permeates everything we do. Check out our blog, have a look at aiformusic.info, note how we publicly list and credit all training data sources for every model in the plugin's browser. I know talk is cheap and trust is hard to establish, so we ask you to judge us on our actions. Feel free to shoot us an email if you want to discuss this properly.
I think to really get this off the ground, and in people’s tracks, the training needs to be somewhat unlimited. At this stage, no one knows what they’re going to get, and they should feel good about experimenting without the fear of results that are not usable. This is compounded by the three-day wait time. Disappointment can easily ruin a great concept. Unleash the early adopters…this is the way.
You raise an important point and it's one we've mulled over a lot internally. We don't want users to feel penalized for experimenting and I know firsthand what it's like when a model didn't come out exactly as planned. On the other hand, training costs us a lot in compute and giving away tokens would be giving away money. Our solution currently is to allow users one retry per token. Generally that's all you need to tweak or fix something in your dataset. We think it's a fair compromise and a gesture of goodwill given that this retry attempt comes out of our own pocket. Your feedback is very welcome though, we're still listening and learning!
offline use or local training options?
Guessing there might be copyright issues with that
Morpho does indeed support offline use. Local training is not a priority currently but we're listening to feedback on the matter. We have another free community plugin, Neutone FX, for those who want to experiment with training neural audio effects locally.
It would be nice not to have to do a PhD in Neutone every time you want to use it.
Could you let us know what you found tricky about using Morpho? We really want to remove as many barriers as possible for artists to experiment with neural audio.
3 to 5 days is wild
Impressive, great work!
this is amazing. I just purchased! lets go.
Thank you very much!
Anything can be made percussive, but noise is not music. I'll look again when notes and chords are trained output.
To be clear, you can absolutely train pitched models. It requires some additional considerations that we wanted to spare you for this introductory video, but it can be done. Personally I've had great fun playing guitar through a model trained on an opera singer!
@neutone_ai Again, as an American to ye Olde Brits, with the greatest of respect, I highly recommend reformatting your approach to relating your developments / products to the outside world. To continue my last comment's example about Tesla using Tesla coils to bake a potato: this commenter has said "Mr. Tesla, I may invest in this when you show me it can do more than cook a lowly potato", and to that Mr. Tesla (you, in the metaphor) has replied "Oh trust me, it totally can do much more than cook a potato, I just didn't want to show that in my video cus it's too smart for you simpletons to understand". As a yank familiar with how Brits word things, there are a lot of "ways of being humble and reserved" from Brit-to-Brit that come off condescending and "head up one's own behind" to people outside that culture. I have a whole theory as to why anything that gets promoted by Brits onto the internet (via written copy, scripts for marketing videos, demonstrations like this one) tends to do fine amongst onlookers in the UK but not so much with the North American market, and I think it's this approach, this cultural "presenting my pride and joy as almost good is good enough, because trying to present my good thing as great would mean I'm secretly sh*te, m8!" - because Brits culturally have this almost Japanese-woman-isn't-allowed-to-take-a-compliment level aversion to earnestly believing in something being great and presenting information about it as such.
you don't understand shit about real audio morphing! the vst is simply a harmonizer where you can lower the pitch and change the formant a little... then when you raise those few values on the screen they only act as an equalization associated with the formant (which is not a real formant). Go back to playing with your toy cars in the bedroom, it's better and don't mess around
No option to train the model locally ? Thats a HARD PASS from me.
Check out our free community plugin Neutone FX if you want to experiment with training your own neural audio effects locally. We have an open source SDK on our GitHub and tutorials to get started.
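For anyone wondering what "training your own neural audio effect" amounts to, the contract is roughly: an audio buffer goes in, a same-shaped buffer comes out, fast enough for real time. The actual Neutone FX SDK is PyTorch-based and documented on their GitHub; this NumPy toy (the class name and its `drive` parameter are invented for illustration) only sketches that waveform-to-waveform shape:

```python
import numpy as np

class ToyWaveformEffect:
    """Minimal waveform-to-waveform effect: buffer in, same-sized buffer out.
    (A stand-in sketch, not the real Neutone FX SDK, which wraps PyTorch
    models; this only illustrates the contract a real-time model honors.)"""

    def __init__(self, drive: float = 2.0):
        self.drive = drive  # a single "learned" parameter stands in for weights

    def process(self, x: np.ndarray) -> np.ndarray:
        # tanh waveshaping: shape (channels, samples) in, same shape out,
        # sample-by-sample with no lookahead - a real-time requirement in a DAW
        return np.tanh(self.drive * x)

fx = ToyWaveformEffect()
buf = np.zeros((2, 512))   # silent stereo buffer, 512 samples
out = fx.process(buf)
print(out.shape)  # (2, 512)
```

A trained neural effect replaces the one-parameter waveshaper with a network, but the in/out contract stays the same.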
@neutone_ai A link to the GitHub in your video's description would be appreciated 🤙🏾