Create Studio Quality Vocal Stems using AI (Suno + Musicfy + Udio) - Tutorial
- Published 27 Sep 2024
- In this video I demonstrate a workflow using three different generative engines in order to create a studio quality vocal stem for use in professional music production.
Suno: suno.com/
Musicfy (affiliate): bit.ly/samusicfy
Udio.com: www.udio.com/
FL Studio: www.image-line....
soundcloud: /hvyofficial
x.com: x.com/hvymusic
spotify: open.spotify.c....
never thought ben shapiro teaching a dope ass ai tutorial
Was just about to type that lol
Just found channel and was gonna post same thing 😂
Great, now I’ll never unhear that. Huge opportunity for some satirical music here!!!
This dude can make Ben Shapiro Ai vids. Undo all the damage he has caused. Hmm
Ben shapiro
Yes
Just wanted to say this content is incredibly helpful mate. Thank you so much for giving a thorough workflow demo, looking forward to more content :)
@ThatDelMarco you made my day, thank you very much. I have a few videos ready to be edited; it's just hard to find the time. :(
This is the video that everyone creating AI lyrics, and especially anyone cleaning up their own vocals, should see, but you selfishly want few to see. Outstanding, you just gave some incredible knowledge here. And holy sheesh, that song was 🔥🔥🔥🔥🔥🔥🔥🔥🔥
I was just listening to the videos in the background... but then I totally had to take a second glance because I WAS SO SURE IT WAS BEN SHAPIRO! Gosh. Great video too! Thanks for sharing!
Even if you make mistakes (or the AI tools do), you go with it and try to adjust. I'm happy you kept it in the video.
I love using Suno, I use it for the majority of the music on my channel.
Suno is great at producing melodies and rhythms, but Udio gives more control over the fidelity of the output. Suno generates quickly, and Udio generates much more slowly. That's why they work so well together: you can bring your Suno generations into Udio and upres them using the remix feature.
@@SynthAlchemy I appreciate that, I am still learning Udio.
You can level up the quality of these noisy vocals by separating them in SpectraLayers into their components: "tonal", "transient", and "noise". Then you can turn the noise part down by about 30%, and the vocals sound less trashy. Just found this out yesterday.
Damn man, that's very interesting. I'm going to look into it more. Right now I use RX by iZotope for restoration. SplitX has some restoration tools as well, but I haven't spent much time using them. SplitX is awesome; if you haven't checked it out yet, I suggest you take a look.
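For anyone without SpectraLayers, the idea in the comment above can be roughed out in a few lines of Python. This is a minimal sketch, not SpectraLayers' actual algorithm: it approximates the tonal/transient/noise split by median-filtering an STFT magnitude (tonal energy is smooth across time, transient energy is smooth across frequency), treats the leftover as noise, and turns that layer down by 30%. The function name and all parameter values are illustrative assumptions.

```python
import numpy as np
from scipy.signal import stft, istft
from scipy.ndimage import median_filter

def attenuate_noise(y, sr, noise_reduction=0.3):
    """Crude tonal/transient/noise split on an STFT, then turn the
    noise layer down by `noise_reduction` (0.3 = about 30%)."""
    f, times, Z = stft(y, fs=sr, nperseg=1024)
    mag = np.abs(Z)
    # Tonal content is smooth along the time axis (horizontal filter);
    # transient content is smooth along the frequency axis (vertical filter).
    tonal = median_filter(mag, size=(1, 17))
    transient = median_filter(mag, size=(17, 1))
    # Whatever neither layer explains is treated as noise.
    noise = np.clip(mag - np.maximum(tonal, transient), 0.0, None)
    kept = mag - noise_reduction * noise
    # Rebuild the signal using the original phase.
    Z_clean = kept * np.exp(1j * np.angle(Z))
    _, y_clean = istft(Z_clean, fs=sr, nperseg=1024)
    return y_clean[: len(y)]

# Demo: a sine "vocal" buried in white noise.
sr = 22050
n = sr * 2
rng = np.random.default_rng(0)
y = np.sin(2 * np.pi * 440 * np.arange(n) / sr) + 0.5 * rng.standard_normal(n)
y_clean = attenuate_noise(y, sr)
```

Since every STFT bin magnitude is only ever reduced, the processed signal's overall level drops slightly; in practice you would compensate with makeup gain.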
The music industry and artificial intelligence are revolutionizing the consumer market.
Awesome tutorial man 💥
Ben Shapiro? Ohhhh... because of his voice? OK, I get it, but hey, that video was kick ass. Some things I knew already and some things I didn't. You just earned yourself a subscriber!
I am honored. Thank you very much.
1.5 speed for authentic Ben Shapiro experience (y)
lol
Ben Shapiro, the guy with eyebrows like “The Lorax”, I don’t see it.
Love the Ben voice
Has anyone ever said you could be a good Ben Shapiro impersonator? I'm not saying it to be rude; sorry if you take offense.
I changed my voice to Ben Shapiro's voice using AI.
The reason there are no real "stems" is that the AI doesn't compose/produce linearly in separate tracks like a human. It's like a "film set" of a song: there aren't really separate tracks for the instruments, because there's no linear "recording" of an instrument. It's fake, an illusion. The music is generated frequency-wise; it's just a math thing. You can hear this very clearly in all the sound holes you get in the "instrumental stem" when the vocals are removed. You don't get these holes when separating a "real" song.
True! Except there are some generative engines that do produce songs with stems. The best one I'm aware of is called Soundful. It has some limitations as far as the genres it can do, and the time signature and tempo options are rather limited. There's also no way to prompt. That being said, it's extremely useful: not only does the end user get a full set of stems to work with, but you can also download MIDI to edit and/or change the instrument.
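The "sound holes" described above are easy to demonstrate with a synthetic example. The sketch below is purely illustrative (it is not how any particular separator works): it builds a mix where an "instrument" shares a frequency with the "vocal", then removes the vocal by zeroing the frequency bins attributed to it. The instrument's energy at the shared frequency disappears along with the vocal, leaving a hole in the "instrumental stem".

```python
import numpy as np

sr = 22050
t = np.arange(sr) / sr
vocal = np.sin(2 * np.pi * 440 * t)  # "vocal": a pure 440 Hz tone
# Instrument overlaps the vocal at 440 Hz and adds its own 220 Hz note.
inst = 0.5 * np.sin(2 * np.pi * 440 * t) + np.sin(2 * np.pi * 220 * t)
mix = vocal + inst

# A mask-style "vocal remover" zeroes every frequency bin it attributes
# to the vocal -- taking the instrument's energy in those bins with it.
spec = np.fft.rfft(mix)
freqs = np.fft.rfftfreq(len(mix), 1 / sr)
vocal_bins = np.abs(freqs - 440) < 5
inst_est = np.fft.irfft(np.where(vocal_bins, 0.0, spec), len(mix))

def band_energy(x, lo, hi):
    """Spectral energy of x between lo and hi Hz."""
    s = np.abs(np.fft.rfft(x))
    f = np.fft.rfftfreq(len(x), 1 / sr)
    return float(np.sum(s[(f >= lo) & (f <= hi)] ** 2))
```

Comparing `band_energy(inst, 435, 445)` with `band_energy(inst_est, 435, 445)` shows the real instrument has strong energy around 440 Hz while the extracted "instrumental" has essentially none: that is the hole.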
Where's the hacker spirit? How hard is it really to slightly change the lyrics so they're not detected as copyrighted? I bet just adding a bunch of oohs and ahhs randomly would have sufficed...
Otherwise, running the lyrics through Claude 3.5 with the prompt "change two words in each line while keeping the general idea of the song the same" would definitely get past the lyrics police. But wasting our time with an irrelevant detour and not even trimming it out of the final vid?? Really now...
Good point, but I had already rendered the song with those lyrics (before I realized they would be flagged for copyright infringement), and I needed the lyrics in Udio to match the lyrics in the song. Changing the lyrics would probably have caused some artifacts and hallucination on Udio's part. I do want to test the limits of Udio's ability to correct lyrics, though, so I should look into that.
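For what it's worth, the "sprinkle in random oohs and ahhs" idea from the comment above is trivial to script. This is a toy sketch only; `FILLERS` and `perturb_lyrics` are made-up names, and there is no claim that this actually gets past any copyright filter.

```python
import random

FILLERS = ["ooh", "ahh", "mmm"]  # filler syllables to sprinkle in

def perturb_lyrics(lyrics, p=0.3, seed=42):
    """Insert a filler syllable after roughly a fraction p of the words
    in each line, leaving the original words and line count intact."""
    rng = random.Random(seed)  # seeded so results are reproducible
    out_lines = []
    for line in lyrics.splitlines():
        words = []
        for w in line.split():
            words.append(w)
            if rng.random() < p:
                words.append(rng.choice(FILLERS))
        out_lines.append(" ".join(words))
    return "\n".join(out_lines)

lyrics = "hello world this is a test\nsecond line here"
perturbed = perturb_lyrics(lyrics)
```

Stripping the fillers back out of `perturbed` recovers the original word sequence exactly, which is the point: the "general idea of the song" is untouched.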
My theory is that if you can't get a good-sounding song with perfect vocals in ONE prompt, you're doing it wrong. If you have to clean up, master, or edit an AI generation you've created, you've failed. The whole concept of these sites is to create music with friends and family; these companies are running a social experiment, letting us create music that does NOT exist, without the need for ANY DAW. The whole idea of AI-generated music is eliminating the need for a DAW, because AI music is generated on a DAW-style timeline anyway. So like I said, "why fix something that's not broken?" Udio is the only audio generator needed; Suno is low quality, full stop. Mic drop.
@djnitro2024 I disagree. The generations aren't good enough alone; they're easily identifiable when compared to professionally produced music. AI music people don't understand this because they don't have trained ears. People who know music production have more control over their music than people who just use AI. There's really no reason not to learn production other than laziness. Truth is, most AI music people are just lazy: they want all the credit without putting in the work. Most AI music sucks.
have you cloned the voice of Ben Shapiro for your video and replaced yours with it?
Yes in fact.
Tutorials to make people believe using AI would have anything to do with actual work… 🤣🤣🤣🤣🤣
yeah lmao
I will never understand why anyone would prefer to use some AI generated slop
@@jake__lol climb out from under your rock
@@AICohesion that doesn't answer my question at all. If you love AI-generated sounds, then you do you. But it's lazy and uncreative at best.
@@jake__ imo the problem is people doing AI music who don't want to learn production. I'm in Facebook groups with these people, and they all consider themselves artists on par with real musicians. It's fucking WILD. I think anyone who is serious about music, AI or not, should be trying to perfect their art. No one wants to listen to music that's "good enough", and that's all AI music will ever be. It's up to humans to expand upon what AI is capable of and venture beyond the boundaries that currently exist.
Actually, a lot of people do want to listen to music that's "good enough", but I think AI is going to saturate the market with mediocre music to the point that mass audiences will seek out original, authentic music.
The first one was the best 2:37
thanks for making this video
- founder of musicfy
@aribk thank you for your software! I didn't spend as much time in musicfy on this video (Udio's fault) as I meant to. I'll do you right on the next one.
@aribk by the way I would love to interview someone from Musicfy. Let me know if there's anything I can do to make that happen.
This is awesome ❤❤❤
suno sounded better than the musicfy split
@@au5t1n17 i disagree.