Tim, an AI stem separation feature was just released in the new Logic Pro 11. Also, have you seen the Sora competitor called Kling from China? Of course it's waitlisted and only available to people in China, but the sample videos are on par with Sora.
Yup to both! Actually, I've been considering switching over to Logic from Ableton. I'm still on Ableton 11, and I didn't see a lot in 12 that made me want to upgrade. Seems like they might be losing their edge. Kling and Vidu look really amazing...BUT...I do wonder if/when we get them!
Neat tool, and I think I might take a crack at it with a concept song I made for a short film I'm working on. I made one using Suno, and other than having the audio defects of AI-generated music, I liked the results. Enough that I could get past the defects and enjoy the song. Thank you for sharing.
There’s a ton of people out there who want to be recognized as musicians without putting in the work to get there. There’s a huge audience for this sadly and streaming platforms are about to be flooded with AI music that people will take credit for, making genuine art and human expression impossible to discern in an ocean of machine “content”.
@@mrnelsonius5631 Kind of a bleak outlook we face, especially considering the copyright infringement happening when the "machine" is learning. Of course we humans have a choice in creating the machines and consuming the content, so we could change the outlook. If not, couldn't that be considered true AI - artificial ignorance?
@@michaelfaeth It's not infringing when it is training. There's already been precedent for years now that training a model is fair use. The only thing that can infringe is the end user. And that's always the weakest link for any technology.
There are always two sides to new tech. Yes, the act of creating music will be devalued, trivialized. On the bright side, everyone will be able to make the music they want (with enough patience), not just the 0.00001%. It's the culmination of the democratization of musical creation.
@@davidvincent380 So no more Mozarts, because any loser can "generate" fantastic music. What a sad state for humanity. There will be no human artists to celebrate. Everything will be either fully generated or tainted by AI.
11:51 YOU NEED to adjust the voice effect to something like the beginning of "Dead and Buried" by Stone Temple Pilots. Like a distorted radio echo effect. It would suit the sound so much. I instantly imagined that when listening to your song.
If you want Udio to copy the melody and voice characteristics of the uploaded clip, make sure the clip features the same melody - like a verse - twice with variations (preferably not the same clip glued back to back, but two distinct renderings by a human or a robot), and then provide Udio with the exact lyrics already sung twice, along with the prompt [melody repeats] (of course the twice-repeated melody must not exceed the context length Udio uses). It won't reproduce the voice every time, but it will eventually. It may take some clipping and extending for it to reproduce the whole sample.

When it does, you can input lyrics for another verse, preceded by [melody repeats], and Udio should be able to continue with the voice and melody it has learned. It will probably introduce some variations to both, but many of the initial characteristics should still be there.

To give the vocals backing, you can then introduce some keywords both in the prompt and in the lyrics window, like so: [melody repeats: (fe)male vocalist, guitar, drums]. Again, this won't work every time, though cranking up prompt and lyrics strength helps. If the results don't sound their best at high enough settings, you can always extend them further on lower ones, with the same verse. When the verse is rendered in a satisfactory way, just clip the whole section before it while extending the next time, for it to be replaced later with an intro.
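As a purely illustrative sketch (not an official Udio template - the tag names come from the workflow above, and exact behavior may vary between versions), the lyrics box for this trick might be laid out something like:

[melody repeats]
Here is the verse I want the model to learn
Sung once, and then sung through again
Here is the verse I want the model to learn
Sung once, and then sung through again

[melody repeats: female vocalist, guitar, drums]
Now a brand-new verse over that same melody
Carried on by the voice it just picked up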
Oh my, I have tons of started song projects lying on my hard drive... I'm currently using Udio a lot to make some stuff with my lyrics... didn't know they added that upload feature... everything in combination... hello future ;)
@@TheoreticallyMedia You think? I see that everybody hates the platform, but it actually pays much better than YouTube. I mean, I got around 150 bucks every month with the same 20+ year-old songs, and I'm no Michael Jackson :D It's not much money, but it is what ads actually generate.
The audio quality isn't there yet, but I'm sure it'll eventually get there. It will need less and less from you to generate something better than anything you could produce in a reasonable time scale. A big component of its improvement is the music that other producers will feed it. I would've loved an AI that lets me interact directly with my DAW. Imagine thinking of a melody and hearing it play back, thinking "no," and imagining a different-sounding fourth note, for example. Imagine EQ'ing the low end until I no longer hear the resonant bass, all of this while just looking at the screen with an EEG-type device on your head. Haha. I mean, that's an AI I would love to see. That's something that could unleash MY internal creativity with authenticity. All it needs to do is interpret brain waves and maybe eye-track to allow for a seamless flow of creation that I control every second of. But I imagine we're far from that; it's basically a brain/machine interface. Instead, we have this: something that sort of delegates the process of creation to an external, separate entity that takes language as input. I'm wondering if this is the wrong direction to take concerning the use of AI in music. A lot of my personal choices when I'm making music (just as a hobbyist) aren't even verbal; it's more like a feeling or intuition of something sounding right in a certain context. Which explains my wish for something that bridges the gap between the ethereal, ungraspable aspect of intuition/feeling and the DAW. That's where the musical or art renaissance lies, in my view of course.
Hmmmmm. You might be onto something. Haha, I also noticed at one point you can see the file name when I popped it into Udio, where it says Jordan_Tim. Haha, so that might also be a clue!
Cool! Thanks for the tutorial. There goes the weekend. Again! Side note: I think you might have been playing guitar in a different key than the vocals. Or John Mayer was off key. Yes, it was his fault.
Thank you! I surround myself with super insanely talented musicians, so I kinda consider myself mid at best-- but, it's always nice to hear that I might kinda hold my own!
Food for thought: why not take the generated track as a reference so it can be mapped out with different samples? You know: import it into a DAW, then use VSTs, synthesizers, and whatnot over the AI track to make it sound more amazing and closer to the rhythm of the track itself.
I see that you have disabled the YouTube embed feature. I would have liked to include this video in my website's free education section, which would have introduced your channel to new people. If you decide to change this setting back to the standard shareable one, please let me know.
I've been making Psalms into metal songs for a bit now. I put them in a playlist titled "Swords of Zion [Psalms]". They're not perfect (some are better than others 😅, but I'm working with the free version, so no inpainting), but I'm still pretty impressed! I even made a silly metal song about the Bible passage where the kids who made fun of Elisha's bald head got mauled by bears for their trouble 😆 (at my brother's request)!
Haha, the Bible is GREAT Metal material. I mean, one of the greatest metal songs of all time, Creeping Death, is right out of it! Ahhh..man, now I gotta go listen to it...haha
Can I upload an entire instrumental song and have udio add lyrics? I have a ton of instrumental tracks that I did a while back but I can’t sing or write lyrics to save my life.
I believe so. You should be able to upload your track and then "inpaint" vocals onto it. If you're looking for a specific voice like yours (or Mayer's), you'll have to go about it the way I detailed in the video. Seems like there are a lot of workarounds here, though! Need to experiment more!
I don't know why it drives me crazy when people mispronounce it as "ooh-dee-oh" to the point I can't sit through a video hearing it over and over again. It's pronounced "you-dee-oh", guys.
Haha, Matt Wolfe told me that as well. In my first Udio video (when no one knew how to pronounce it) I said it both ways, but pointed out that Wu-Tang clan calls the "Studio" the "Ohh-Dee-Oh" so I'm sticking with that!
Wow indeed! So great to hear Suno is coming out with something similar, as I definitely prefer that platform; I've gotten much more usable songs out of it myself. All just bananas how fast the AI tools keep advancing.
AI is awesome for making shit that basic people love. If you have no personality or point of view, then AI music is wonderful for you. I've been completely unable to create a track that fits in with my personal aesthetic.
Oh, don't get me wrong-- I listen to a LOT of weird stuff that you likely can't prompt for...BUT, I do think that in our case, AI can serve as some interesting building blocks. Generate some sounds/songs-- and then mangle them up in post production and use them as elements in original music. Right out of the box? Well-- you get those generic lyrics that ChatGPT provided me. I mean, they're fine-- but they're also kind of wallpaper. (Sadly-- MOST people are into wallpaper music...)
UDIO to MIDI stems is what I'm after. Then we can fill in the blanks of our never-finished tracks. I tried UDIO > stem separation > MIDI export > Ableton Live > VST, but that hasn't led to anything clean. Too many instruments and samples get lumped into one stem. Then I tried RipX DAW to clean up the AI-generated stuff, to no avail. It's basically the same problem. Anyone have an idea how to proceed?
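For anyone who wants to script that chain outside a DAW, here's a minimal sketch of one possible offline route, assuming the open-source Demucs separator and Spotify's basic-pitch transcriber are pip-installed and that their CLI arguments haven't changed (worth checking `demucs --help` and `basic-pitch --help` first). It won't fix the "everything lumped into one stem" problem, but it automates the boring part:

```python
# Hypothetical offline workflow: full AI mix -> stems -> rough MIDI.
# Assumes `pip install demucs basic-pitch`; CLI flags are from memory
# and may differ between versions, so treat this as a starting point.
import subprocess
from pathlib import Path

mix = Path("udio_track.wav")          # the downloaded Udio WAV
stems_root = Path("stems")            # Demucs writes <out>/<model>/<track>/
midi_dir = Path("midi")
midi_dir.mkdir(exist_ok=True)

# 1) Source separation: the htdemucs model yields drums/bass/vocals/other.
subprocess.run(
    ["demucs", "-n", "htdemucs", "-o", str(stems_root), str(mix)],
    check=True,
)

# 2) Transcribe each pitched stem to MIDI (skip drums -- pitch tracking
#    on percussion mostly produces noise).
for stem in sorted((stems_root / "htdemucs" / mix.stem).glob("*.wav")):
    if stem.stem == "drums":
        continue
    subprocess.run(["basic-pitch", str(midi_dir), str(stem)], check=True)

print("Rough MIDI written to", midi_dir.resolve())
```

The resulting MIDI is usually messy and needs hand-cleaning in Ableton, but it at least gets the note material onto a piano roll.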
I want to upload entire tracks in less-than-stellar quality (mostly from cassettes) and have it reproduce them as if they were done in a professional recording studio. I may have to pay the fee to test out what is possible.
YOU'RE WELCOME. I SUGGESTED THIS TO THEM IN DISCORD, AND IT ARRIVED LITERALLY WITHIN A WEEK (I ASSUMED A YEAR OR MORE TO ACTUALLY MAKE THIS - I CAN'T IMAGINE AI IN A YEAR NOW). I have been wanting this forever, and I double, triple, quadruple checked: it was me who suggested this... I personally see this, and its release day, as the most important thing in music history of all time / the most useful invention since thumbs. You're welcome, from Jokrtherapper
Excellent!! Uhhh, can you go over to the Discord and suggest that you and I win the next big Powerball lottery? =) Haha, but seriously: agreed! Can't wait to see what things look like in a year! If we're here already, can you even imagine?
Haha, I don't know if you follow the channel Drumeo (Drummer focused channel, and although I don't play drums, I LOVE it)-- but they just released a video of a young drummer named Nandi Bushell recording a version of Holy Diver, having never heard it before. She kills it! It's an adorable video! High recommend!
I'm 100 percent positive that both udio and suno will allow for track separation before the year is over. I bet my third nipple on it.
I'll pop a gimpy toe in on that bet as well!
Your "Superfluous Papilla" Monsieur Scaramanga?
Google Lyria can already do it, but being Google they still haven’t released it
I bet they get sued before this happens; wheels are in motion, so get your handkerchiefs ready.
The new Logic Pro 11 features an AI stem separator that is very good.
Thanks for this great overview! I've been using Udio for a few days and Suno for about a week now. Both platforms have their strengths, and I'm impressed by how much I've been able to accomplish despite having no real musical talent. At 62, I'm definitely an 'old dog,' but I found these AI music tools surprisingly easy to use. I've managed to create some pretty good songs, which is amazing given my lack of musical background. The upload feature in Udio sounds particularly exciting - can't wait to try it out! It's fascinating to see how AI is making music creation accessible to everyone, regardless of age or experience.
As a studio engineer, composer and studio owner with over 40 years of experience, I have done thorough research with Suno and Udio. I loaded a studio mix into Udio, experimented with it, then returned to the mixing desk computer to compare the files. Although the composition was reasonable, the quality of the data was poor. I noticed a significant lack of high frequencies and dynamic range in the Udio-arranged piece. It serves more as a tool for composers and hobbyists to make music. However, the files are not suitable for production and must be re-recorded.
Super Valid! I’m (at best) at the apprentice level with audio engineering. Basically, I know just enough to get myself in trouble!
But for sure, that "fuzz" caused by the diffusion process is going to take a few more generations to stamp out.
I think the larger overall issue is that (sadly) most people don’t notice. They’re listening on phone speakers or cheap earbuds. I mean, I don’t need to tell you!
But, I think you nailed it on the composition point. I think it’s an interesting tool to load a work in progress tune into and get some ideas. Even if the results come back as something you AREN’T happy with, often the bad version will spark ideas you might never have considered.
Sadly, 99% of music consumers (a.k.a. the buying majority of the end market) won't notice, nor do they care.
The average person doesn't care, tbh.
There's an AI app that remasters any song too.
@@JayKillXRP What are you referring to?
Udio, Suno, or anyone really, needs to make a DAW or something similar (even if primitive) where you can generate AI music and it will individually place all the different parts of it into a file for you to edit.
Imagine FL Studio, but with Suno generating the music and you able to individually change each piece of it like a real song you made from scratch!
I am SURE that someone is working on a deal. I have seen a few products and projects that seem to be looking to bridge that gap, but nothing official yet.
Yet.
I will say, loop sites like Splice? They're toast.
Most likely FL Studio/Ableton will just hire their own team to build an internal tool that copies Udio/Suno's functions.
There is an online DAW that does exactly that: Waves studio.
Isn't that sort of what AIVA is?
Would be great to have a plugin for this.
Track separation is quite popular; what I'm waiting for is MIDI output for the separated tracks. :)
Oh, right! That too!! The holy grails of AI music!
This is what I've always dreamed of. I have so many pieces started that I could never finish... now I can.
We all do, my friend.
I am about to upgrade my subscription for this. Waiting on Suno's response before taking the leap.
I did subscribe and can confirm that it works really well. At least for unlocking new ideas for your tracks.
@@albertvargasUX Suno 3.5 is kind of a disappointment, but there are apparently some pretty big changes coming with 4.0. That's what I'm personally waiting for.
make sure you give the AI half the check
YES!! The separating out the individual tracks will end up being the real game changer! Being able to edit each instrument will change everything about AI music generation
Suno is starting an 'upload function' too, for early access members. In some ways, Suno is even better. It's truly an amazing world.
I did reach out to Suno (mentioned at the end of this video), but I haven’t heard back! Would love to cover them on the channel!
Thank you, Tim, for showing real production possibilities; experimenting with layering etc… and taking it well beyond the regular YouTube thing of "I asked this AI music tool to write me a song about my rabbit and it was just AMAAYYZING!!"
I love seeing people trained in the arts experiment with AI.
I have so much fun playing with video and 3D AI tools, but it's like quicksand or something too.
the stacked version is actually pretty darn good!
I really want to keep stacking more-- just to see how insane it can get!
Agreed. Was totally getting into it.
The audio quality is not there yet. I hope they'll focus on that next, because that is the most important thing for actually using it in production.
For sure-- I didn't spend a lot of time futzing with the mix on any of these. Just basically tossed them into Ozone and let it do its thing.
I think-- with some elbow grease, and probably way more mixing knowledge than I have-- a talented engineer/producer could probably really make AI tracks shine-- but-- again, that's hours of work right there.
Not 100% true. Some tracks you create with UDIO sound super clear and digital, so it depends a bit; most things I create with UDIO have a digital quality to them. Also, I throw the audio file through Ozone 11 to master it and make it even clearer :)
It depends what music you're talking about. It's not true that the quality isn't there for all music. And it certainly isn't true if you're creatively using it for music production, where you don't even necessarily need to use the audio itself, just the idea.
@@TheoreticallyMedia OR you could just do it yourself.
Recalling the many days I would sit around with the band in the studio just noodling for ideas for our next song, I think this AI tool will really help with that time curve and shorten the noodle cycle. Then, as a band with a basic idea for the song, we all take it to the next level.
THIS!!! SO MUCH THIS!!! Most "writing" sessions I've been in are usually Jam sessions. I was always the smart one and at least recorded them to see if there was anything good to mine out of them.
But a REALLY amazing idea for something like Udio is when you're stuck on what comes after the good jam section (I've always called them Lego blocks)
And the thing is: You don't HAVE to use what the AI Spits out, but I think just the fact that it'll provide something you can react to will spark something.
If you could upload a song, for example American Pie, but only the first half, and let Udio continue with its own version of the 2nd half, then download it, delete the original first half, upload again and generate a new first half, you would then have a new song in the style of American Pie, even sounding like it, but original.
Yup! Man, this is the kind of stuff that is going to be incredible in terms of breaking writer's block!
You can inpaint even now, so it's doable!
Ay yi yi. All those die hard fans of Prince, Led Zepp, MJ, etc, will make their own postmortem hits. Wow😅... Not that I'm doing that or anything... 😳
Btw: Suno's similar feature won't release for another 2 months.
@@TheoreticallyMedia Writer's block doesn't exist. Stop promoting this shit.
My pitch may be weird today but your "4 Chords" sounded like only 2 to me - Am flat and G flat then Am flat again, yes? That sounds absolutely crazy beautiful though OMG. Great content bro!
I'm not so sure that these AI music apps want us to be able to pull the stems apart or create derivative works out of what they make. My sense is that they want to give us finished music pieces for our purposes, but not to let us replace musicians and producers. It will be interesting to see who pushes into this area, though. Frankly, I'm a little surprised that Udio lets us download WAVs.
UDIO is developing new things as we speak :) I'm sure that part will also come ASAP.
Suno supports wav download as well
Why wouldn't they? They want people to use it and be creative with it, so restricting anything doesn't help them at all!
Oh man, you're really starting to master the hooks and "stay tuned for this..." retention techniques.
Haha, you have studied the game, I see!! My other trick in this video was the really subtle "Don't forget to subscribe!" that was easter-egg'd in!
It's funny how the retention thing is actually creeping into my normal life. Like, I'm actually saying things like: "I'll get to that in one second, but first..."
haha...thank god no one notices!
I've spent more than 5,000 credits on Udio. What I think: it's cool for those who have no skills, but slightly frustrating for musicians, because we'd like to add an effect or change the notes, etc., in precise places. What excites me most is that I sense an AI-boosted DAW coming that would offer us a whole new way of creating. I dream of software where we just have to hum, then prompt the instrument we want, and then adjust the tone, vibrato, etc., like in a real DAW. We already have acoustic production and production via software; this would simply bring a new way of creating.
Currently UDIO (latest update about a week ago) cut many useful things (e.g. the ability to rename songs and the song cover creator), so NO money from me for such unreliable policies.
But the moment UDIO (or whatever app) offers: 1) adding / erasing / replacing instrument parts, track by track; 2) full audio import (first of all for a melody-writing feature from vocals); and 3) changing and translating lyrics (after the track is fully created), I will gladly pay whatever reasonable price they ask. Up to $50/month is absolutely legit, or $90/month if I then own the rights to market the created music in a fully, legally bulletproof way.
Talking to them soon. I’ll see what I can do regarding the inpainting features (by that, I mean pass the note along)- the legal stuff is obviously above my pay grade!
@@TheoreticallyMedia I dunno if I got my point across well, but many thanks in advance!
@@TheoreticallyMedia And yes, remixing of READY songs (not just the first take of ca. 0:30 minutes) would be nice too, as would getting rid of the current song-length limit of ca. 4:20.
@@TheoreticallyMedia Currently it's only somewhat "possible" (the only workaround) as a REMIX with value "1" (the "mutation" % slider all the way to the left) to change the lyrics AND main prompt AFTER the initial take is created (that first ~0:30), in order to get the vocals in a different language with different lyrics. Often the result comes back half sung in the newly chosen language and half in the previous one, or plainly mixed up with gibberish, or sung with an accent - which can be an artistic asset in itself, but it's hit or miss at this point. Either way, that can't really be considered translated / adapted vocals and lyrics.
@@TheoreticallyMedia Also, please, more name / tag variations. Currently there's only one "name" per song, which can no longer be renamed; the renaming feature would be nice to bring back, at least once a song is done and ready to be published (otherwise publishing is effectively impossible right now). There should also be a better way to distinguish between the takes and each of their successive variations than the current green, pink and blue tags (each also in a black-and-white version), which makes it rather hard to tell all the threads apart.
As a professional musician, I do hope it allows for inspiration for folks who are exploring their own expression, but it feels kinda like training wheels imho. Thanks again Tim for keeping us in the loop!
100%! At the very least, it’s a new toy we can play with. I think it is a neat idea to upload music you’re working on just to get some new ideas if you’re stuck!
So, why does it sound like it's playing through a broken old radio speaker?
I did a track starting with my vocals and music, and it replicated my voice and music, using the same style, for the remainder of the song almost perfectly. It was astounding.
I really don’t think it’s going to replace anyone. Creative and charismatic humans are on a whole other level. I see it as a tool though, killing the samples market, not the human market.
Yup! 1000%. That’s kind of what I was getting at toward the end with that feedback intro. Something I never would have thought of.
Even the way it took my main riff in the first example and added some zing to it. So fascinated by the idea of writing something and then hearing an AI say “try this”
@@TheoreticallyMedia Many of these services are illegal and currently under investigation. They trained without permission, and the labels aren't going to blindly accept that. It is also incredibly destructive to many chains of the creative industries, so while it might seem all fun and light, the reality is that this service should exist, but they should pay for the data they illegally train on. There is nothing impressive about theft. Anyone will be able to do it in a year or two - and then what?
I personally can't find almost any artist who can make images as good as AI does. The same will likely happen with music.
@@mattc3510 Painters were always disembodied from their work. Music was, is and always will be an intimate relationship between the listener and the musician. Is it a fluke that looks play an important role in a singer's success? When was the last time you heard a song that you REALLY liked and didn't search for the artist?
@@mattc3510 I feel the inverse. I have never seen AI make images as good as great artists can.
Suno has an upload feature. I’ve been using it on some of my older songs I created. It’s amazing.
Great video... but... LOL... yes of course "In My Arms" (Your guitar under 'Mayer' vocal) was always going to be "Mid" when your guitar track was in completely the wrong key. I'd be interested to see how well it turns out if you retry it with that fundamental error corrected.
Haha, that was my bad. I recorded the guitar/bass/drums based on the key of the original sample. I'll say, even still, I like what Udio did with the riff. Just little adjustments with staccato and phrasing. It was really interesting.
Great video Tim. See you at the DQ!
Wow, really impressed that it stacked and aligned so well; stacking in photography and astro is a thing, now for music too. Been waiting for this feature in Udio; so many possibilities.
It's very interesting to see a creative person using these AI tools constructively. More projects and tutorials like that, thank you.
Been playing with Udio for a while. Absolutely insane! Haven't found a good tutorial on inpainting on Discord or elsewhere. It's vague. Please help if it's something enough people want. There are nowhere _near_ enough credits if we're _really_ experimenting. Also, there's a 'Common Commands in Lyrics' page that can do wonders, e.g. [Build-up], [Chorus], [Instrumental] and many, many more. There's a HUGE difference if you input something recorded on a webcam versus a studio mic for the audio input. Loving Udio. Not loving the lack of credits and lack of real customer support. Thanks Tim. 👍
Actually talking with them this week about a full deep dive tutorial. I agree, there is a lot of fog in terms of instruction, so my plan is to really dig in with the devs to do a comprehensive tutorial. Stay tuned!
It looks like Suno is testing a similar feature, only theirs looks like you can add your own audio as a layer and it will build a track around it, which in my opinion is far more useful than Udio's tool. A couple of YouTubers have tested it and posted results.
REALLY wish you could share a YT link of one of these test sessions.
Yup! If you watch until the end, I do mention that I reached out to Suno to see if I could get a beta and make a video about it. They have not responded. haha..alas...
@@TheoreticallyMedia whoops my bad, Tim. I missed that part 😬
Looking forward to the Suno comparison if they bring in something similar soon. If you could do a step-by-step method, that would be helpful, as I found this a little rushed and it is quite different from the previous version.
For sure. This has been a general YouTube problem, and I'm looking to solve it with a community space soon. The problem here is that I have to balance a tutorial with retention. So sometimes I have to move fast, but still stay basic. It's an odd tightrope.
Hoping to get that squared away this summer. Build a little playground where I/we can all be a little looser.
5:29
Wow Wow Wow!!! ❤️
Weeks ago, I was dreaming of and talking to a friend about how awesome it would be if music AI tools would eventually give users the capability of being able to guide the AI with input audio similar to how we can do image prompting in MJ for example..... and now just several weeks later, 💥 ,
this exact feature being highlighted in this video (audio guiding in Udio) is a reality!
To me, this is game changing technology. I'm not a musician at all yet this capability has me as excited about the near future as SORA has me excited.
This is big!! 👏 👏 👏
This is a really cool feature, BUT I really want them to add a "STYLE REFERENCE" feature like Midjourney instead… let me feed in a song I like and generate a completely new, similarly styled song… not a cloned version of the original song I can't actually use because of copyright issues. That would basically perfect these tools for me.
Ooooooh, style reference would be cool!
Heck, I would settle for a /describe feature to fine tune my prompts!
10:06
WOW! I'm very impressed once again....... the quality is astonishingly good.
It's actually mind blowing to think how good the quality already is and yet we're basically just emerging from the infancy of AI music.
"What a time to be alive!!" 👍
6:53
Stroke?
😆 🤣 😂 Always appreciated your humorous commentary!
My romance with Udio is dying. It is either buggy or I don’t know what I’m doing. Here is how I try to create a song in Udio:
Add my prompt with appropriate tags.
First time around, I let Udio generate the song and lyrics
After Udio generation, I click the song title to enter the area with the large Cover Icon. Here I click edit to change the lyrics.
After changing the lyrics, I click save. Udio tells me that the lyrics have been updated.
I click play and Udio plays the original lyrics, ignoring my changes; or it creates brand-new lyrics, still ignoring my lyric changes; or, even worse, it starts singing gibberish.
So next I try a few remixes with fingers crossed that Udio will start using the lyric changes I made. NOPE, no such luck.
Any feedback, thoughts, and suggestions are welcome. I haven't checked whether Udio has a YouTube account. Proper documentation is as important as the software itself. There are tons of YouTube videos about Udio, but also lots of inconsistencies. We need proper YouTube videos from the creators.
Thanks for any thoughts in advance.
p.s. Depending on what you want done, there may be several Udio workflows. If anyone has a good handle on Udio workflows, please post on YouTube and let us know. At the moment I would want to know a workflow for straightforward creation of a song, and another workflow for when you need to edit (lyrics, singer, melody, etc.).
I’m talking with them to do a full comprehensive tutorial. Like, getting into the deep end with the devs to make sure that every aspect is buttoned up and correct. Stay tuned!
What I would love to see is being able to use your past generations as a style guide for future generations.
Really interesting video. The one question that I never see an answer to is where do these engines get their samples from? Are they using Kontakt libraries in the background? If it's loops, where are these generated from? We all know the issues with Splice and sample clearance. These are really important things to know if you ever want to use any outputs in your projects.
So, I don't know the full back end of their tech, but hypothetically it's running a diffusion process-- meaning that it takes the whole of the training data and then creates a super fuzzy "image" (I'm going to speak in visual terms here, but the concept is the same) that almost resembles old-school TV static.
The system then starts to hallucinate based on your prompts and, almost like when you're seeing shapes in clouds, it forms a picture. Or in our case, a bassline, a drum beat, etc. Hypothetically, it should be altering the training data so much in the process that the output doesn't resemble the source-- although we have obviously seen that that's not foolproof. I'll also say that this is a larger problem within copyright/music publishing (see the Katy Perry lawsuit)-- Western music is basically 12 notes and, like, 5 chord progressions (at least in pop)-- so you're bound to get repetition.
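If it helps to see the shape of that idea in code, here's a deliberately toy sketch (my own illustration, not Udio's pipeline or anything close to it): start from pure static and repeatedly nudge it toward a known "clean" signal, the way a trained denoiser would nudge noise toward something that resembles its training data.

```python
# Toy illustration of the denoising/diffusion idea -- NOT Udio's architecture.
# A real model learns to predict the noise to remove at each step; here we
# cheat and blend toward a known target just to show the trajectory.
import numpy as np

rng = np.random.default_rng(0)

sr, freq = 8000, 220.0                  # 1 second of a 220 Hz sine at 8 kHz
t = np.arange(sr) / sr
target = np.sin(2 * np.pi * freq * t)   # stand-in for "what the model learned"

x = rng.normal(size=sr)                 # start from pure static ("TV fuzz")

for step in range(50):
    alpha = 0.1                         # how much denoising per step
    x = (1 - alpha) * x + alpha * target + 0.01 * rng.normal(size=sr)

print("correlation with the clean signal:",
      round(float(np.corrcoef(x, target)[0, 1]), 3))
```

After 50 steps the static is almost entirely replaced by the sine wave, which is roughly the "shapes in clouds" moment described above.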
Side note: I stopped using Splice after getting hit w/ a copyright violation for a song I used as background music on the channel. It was an original composition, minus one Splice Drum loop. I could have argued it, but-- I don't know, it was early enough in the channel that I just decided to take the hit and learn the lesson.
@@TheoreticallyMedia Brilliant explanation, and it makes perfect sense. In theory that means not only is music on the AI agenda but, in essence, sample libraries too. This has some major shock waves coming, depending on the outcome of the multitude of court cases.
I'm really enjoying the newest features on Udio. I took a song I made on there before the crop and inpaint features were added, then cropped out the bits of that original that worked, and I've almost made a full album with a really consistent voice and genre because they all come from the same "base". My only other wish, apart from isolating and editing different tracks, would be a "stitch" or removal feature. For example, there might be something in the middle of a generation that I want to remove so it goes straight into the next part of the song; if I'm happy with the end of the generation, I want to be able to clip out a few seconds and have Udio stitch the pieces together nicely. I could probably do this with some tools offline and then have Udio inpaint the sections together, but a) I'm not a music editor and b) I want to be able to do it all in one system.
Great video, keep up the great work Tim 👍
Super appreciate it! Thank YOU!
I'm convinced now that it's pronounced "YOU-dio" because their own system pronounces it this way if you include the name in a song.
Maybe it's U (you) + DIO (which, in Italian, means God)... YOU GOD... whereas I think SUNO derives from SUONO (which, in Italian, means SOUND)... I think... 😅
Certainly looking forward to having a self-created suite of artists whose voices can be reproduced across different songs.
It’s the one thing they’ve all lacked. The ability to “cast” a singer for a cohesive album.
Kitbashing like the Mayer Method (can we name it that?) is a temporary solve, but it won't be long until we've got an official version.
Don't forget that we already have the techniques to replace any voice in a song with another. You could basically produce an album with Udio (having different voices on each track) and then replace the voices on all of the songs with one single voice. If you use something like ACE Studio or Emvoice for the final voice you could even change melodic and lyrical details as well as mix the voice separately. I never tried this approach and I guess there's some things that won't work as well as you'd like (for example when you have multiple voice melody lines at the same time in the original Udio song). But I see no reason why this wouldn't work conceptually.
4:30
Wow!!!! Extraordinary sound! LOVE LOVE LOVE this DAW compilation 💜
It's crazy right? I think we will see a certified AI Generated song break out into the pop music world this year. It's almost too good not to.
@TheoreticallyMedia
Truly mind blowing tech and you're probably right on this! Wow! Wow! Wow!
Damn that was insane!!
can't wait for multi layers, and also text/voice-prompted musical changes: "Add a fat bass and drums, change to a key..., make it 5 mins long," etc.
Woah dude, you collaborated with Palkid?! I love his music
I did! It was a few years ago now, but yeah! Good dude and a stellar musician!
@@TheoreticallyMedia that is so funny, I met PalKid about a year ago and he & I have made 2 tracks together so far. We made lofi covers of "lit" from a silent voice, and "lyra's theme" from pokemon heartgold & soulsilver. He really is a great dude!
A collective of artists made an entire album with udio. They are called “The Unseen Analog”.
oh, I hadn't heard that-- I'll check it out for sure!
Not good.
"Made" is a bit of a stretch. Someone typed for a couple of seconds, and out came some zeros and ones they really did not much to affect, and they called it "an album".
@@Zactivist the collective wrote all the lyrics.
@@DJVARAO then why are you here watching a video about Udio if you think the music made by it isn’t any good?😂
As a kid in Sweden in the 70s-80s that John Mayer gibberish is how we pretended to speak English. 😅 Udio brings me back everyday. 🤗
On the flip side I did spend around a hundred generations trying to give such a gibberish song actual lyrics, and did eventually succeed, but man did that turn out to be not much fun after, say, generation 80 of the _same_ thing..! 😰
It would have been easier trying to make gibberish a genre of its own.
Haha, speaking of Sweden-- my kids (for some reason) have recently glommed onto the Mamma Mia soundtrack-- so...there's been a LOT of Abba being played around the house lately.
Not gonna lie, Abba kinda slaps.
@@TheoreticallyMedia I actually had a recent generation very obviously 'inspired' by ABBA. It's sometimes very clear what Udio is trained on. I also got 100% Bonnie Tyler at one time..!
But yes, there are some very good ABBA songs. 😊
Isn't there a genre called "mumble rap" where you can't understand the lyrics? Sounds like Udio would be perfect for that 😅
@@missoats8731 Probably. I was more referring to how the amount of work it took would be analogous to inventing a genre, giving it traction, and making it viral.
There is a Swedish 'artist' called Eilert Pilarm who can't speak English, but sings Presley songs. That's pretty much a genre too.
This is really what I think AI should be used for-- it helps enhance your creativity. Great demonstration!
I like it the other way around: I choose a track that I vibe with and build on it until I've made it my own.
Same, that's how I have been using it so far... generate new stuff, then remove the original audio and have it generate that part as well, so in the end there is nothing original left.
Maybe using Suno generations to feed into Udio is the safest way to play with this.
Ohhhh...I can't wait to try that! Great idea!
But why use Suno when you can literally use any audio? Including Udio generations?
@@Edbrad copyright issues?
If there were a feature to download WAV stems of your own creations… that would be 🤩
That's the holy grail right now. Udio IS working on it though. We'll see it...I'd say in a few months?
It's nice when it's an actual musician using Suno/Udio.
How did you upload the vocal sample onto Udio? I can’t find the option to do that.
Tim,
An AI stem separation feature was just released in the new Logic Pro 11.
Also, have you seen the Sora competitor called Kling from China? Of course it's waitlisted and only available to people in China, but the sample videos are on par with Sora.
Yup to both! Actually I've been considering switching over to Logic from Ableton. I'm still on Ableton 11, and I didn't see a lot in 12 that made me want to upgrade. Seems like they might be losing their edge.
Kling and Vidu look really amazing...BUT...I do wonder if/when we get them!
Tim... that's kinda Epic. How about you edit that stacked mix and put it on Spotify?
Neat tool, and I think I might take a crack at it with a concept song I made for a short film I am working on.
I made one using Suno and other than having the audio defects of AI generated music, I liked the results. Enough that I could get past the defects and enjoy the song.
Thank you for sharing.
Continuum Drift may become my new mobile's ringtone
Oh, that is SUCH a good idea!
Is it unreasonable to think that the best way to keep making music is to use only organic intelligence?
There’s a ton of people out there who want to be recognized as musicians without putting in the work to get there. There’s a huge audience for this sadly and streaming platforms are about to be flooded with AI music that people will take credit for, making genuine art and human expression impossible to discern in an ocean of machine “content”.
@@mrnelsonius5631 Kind of a bleak outlook we face, especially considering the copyright infringement happening when the "machine" is learning. Of course we humans have a choice in creating the machines and consuming the content, so we could change the outlook. If not, couldn't that be considered true AI - artificial ignorance?
@@michaelfaeth It’s not infringing when it is training. It’s already been a precedent for years now that training a model is fair use. The only thing that can infringe is the end user. And that’s always the weakest link for any technology.
There are always two sides to a new tech. Yes the act of creating music will be devalued, trivialized.
On the bright side, everyone will be able to make the music they want (with enough patience), not just the 0.00001%. It's the culmination of the democratization of musical creation.
@@davidvincent380 so no more Mozarts, because any loser can "generate" fantastic music. What a sad state for humanity. There will be no human artists to celebrate. Everything will be either fully generated or tainted by AI.
11:51 YOU NEED to adjust the voice effect to something like the beginning of "Dead and Buried" by Stone Temple Pilots-- a distorted radio echo effect. It would suit the sound so much. I instantly imagined that when listening to your song.
If you want Udio to copy the melody and voice characteristics of the uploaded clip, make sure that the clip features the same melody - like a verse - twice with variations (i.e. preferably not the same clip glued back to back, but two distinct renderings by a human or a robot), and then provide Udio with the same exact lyrics sung twice already, with the prompt [melody repeats] (of course the twice-repeated melody must not exceed the context length Udio uses). It won't reproduce the voice every time, but it will eventually. It may take some clipping and extending for it to reproduce the whole sample. When it does, you can input lyrics for another verse, preceded by [melody repeats], and Udio should be able to continue with the voice and melody it has learned. It will probably introduce some variations to both, but many of the initial characteristics should still be there.
For giving the vocals backing, you can then introduce some keywords both in the prompt and in the lyrics window, like so: [melody repeats: (fe)male vocalist, guitar, drums]. Again, this won't work every time, though cranking up prompt and lyrics strength helps. If the results don't sound their best at high enough settings, you can always extend them further on lower ones, with the same verse. When the verse is rendered in a satisfactory way, just clip the whole section before it while extending the next time, for it to be later replaced with an intro.
eyyy, it was cool to see you play electric guitar. AI is great, but it's still amazing to see real playing
Groovy... Kitbashers Unite!
The voices just aren't 100% there yet, but for instrumentals this feature might already be.
2:39 PALKID 🔥⛽🔥⛽
Haha. Good dude!! Really enjoyed working with him!
oh my, I have tons of started song projects lying on my hard drive... I'm currently using Udio a lot to make some stuff with my lyrics... didn't know they added that upload feature...
everything in combination... hello future ;)
Good news is that with 11,000 plays you actually made at least 5 dollars, probably more. :)
Ahhhh, but don’t forget: this was a collab! So, really: $2.50. Then, minus platform fees, more like a dollar.
Sigh. Such a terrible system.
@@TheoreticallyMedia
You think?
I see that everybody hates the platform, but it actually pays much better than UA-cam.
I mean, I get around 150 bucks every month with the same 20+ year old songs, and I'm no Michael Jackson :D
It's not much money, but it is what ads actually generate.
Oh shit Tim rockin out!
The audio quality isn't there yet, but I'm sure it'll eventually get there. It will need less and less from you to generate something better than anything you can produce in a reasonable time scale. A big component of its improvement is the music that other producers will feed it.
I would love an AI that allows me to directly interact with my DAW. Imagine thinking of a melody and hearing it play back, thinking "no," and imagining a different-sounding fourth note, for example. Imagine EQ'ing the low end until I no longer hear the resonant bass-- all of this while just looking at the screen with an EEG-type device on your head. Haha. I mean, that's an AI I would love to see. That's something that could unleash MY internal creativity with authenticity. All it needs to do is interpret brain waves and maybe eye-track, to allow for a seamless flow of creation that I control every second of.
But I imagine we're far from that; it's basically a brain/machine interface. Instead, we have this: something that sort of delegates the process of creation to an external, separate entity that takes language as input. I'm wondering if this is the wrong direction to take concerning the use of AI in music. A lot of my personal choices when I'm making music (just a hobbyist) aren't even verbal; it's more like a feeling or intuition of something sounding right in a certain context. Which explains my wish for something that bridges the gap between the ethereal, ungraspable aspect of intuition/feeling and the DAW. That's where the musical or art renaissance lies, in my view of course.
Hmmm that piano snippet sounded a lot like the wizard himself 🤔 . As a DT fan, that would be totally rad
Hmmmmm. You might be onto something. Haha, I also noticed at one point you can see the file name when I popped it into Udio, where it says Jordan_Tim. Haha, so that might also be a clue!
@@TheoreticallyMedia I'm so jealous!! Having JR on this channel (twice no less) was NOT on my bingo card for 2024 🤣
Cool! Thanks for the tutorial. There goes the weekend. Again! Side note: I think you might have been playing guitar in a different key than the vocals. Or John Mayer was off key. Yes, it was his fault.
Haha. Yes, Mayer was in the wrong key. Haha, I knocked that out in 10 minutes. We’ll correct it when we do the AI John Mayer live tour!
you are so good at guitar , wow
Thank you very much. I had some guitar classes in high school.
Thank you! I surround myself with super insanely talented musicians, so I kinda consider myself mid at best-- but, it's always nice to hear that I might kinda hold my own!
Food for thought: why not take the generated track as a reference so it can be mapped out with different samples?
You know: import it into a DAW, then use VSTs, synthesizers, and whatnot over the AI track to make it sound more amazing and closer to the rhythm of the track itself.
I see that you have disabled the youtube embed feature. I would have liked to include this video on my website free education section, which would have introduced your channel to new people. If you decide to change this feature back to the standard sharable setting, please let me know.
MUSIC IS THE KEY TO AGI
I've been making Psalms into metal songs for a bit now. I put them in a playlist titled "Swords of Zion [Psalms]". They're not perfect (some are better than others 😅, and I'm working with the free version, so no inpainting), but I'm still pretty impressed! I even made a silly metal song about the Bible passage where the kids who made fun of Elisha's bald head got mauled by bears for their trouble 😆 (at my brother's request)!
Haha, the Bible is GREAT Metal material. I mean, one of the greatest metal songs of all time, Creeping Death, is right out of it! Ahhh..man, now I gotta go listen to it...haha
Can I upload an entire instrumental song and have udio add lyrics? I have a ton of instrumental tracks that I did a while back but I can’t sing or write lyrics to save my life.
I believe so. You should be able to upload your track and then "inpaint" vocals onto it. If you're looking for a specific voice like yours (or Mayer's), you'll have to go about it the way I detailed in the video.
Seems like there is a lot of workarounds here though! Need to experiment more!
@@TheoreticallyMedia Very cool! I’m gonna try it today!
For the video snippets, are you using LTX Studio?
This AI music is great for fan films and shorts. Kinda scary creating Star Wars music out of thin air.
Yes, do the combo track audio and instruments
I recommend Bob Doyle Media on UA-cam for exploring some of the more advanced uses of Udio and Suno with other tools...
👋 Looking forward to watching this
It's a pretty good one! Some issues with the blue screen-- but nothing too major!
I don't know why it drives me crazy when people mispronounce it as "ooh-dee-oh" to the point I can't sit through a video hearing it over and over again. It's pronounced "you-dee-oh", guys.
Haha, Matt Wolfe told me that as well. In my first Udio video (when no one knew how to pronounce it) I said it both ways, but pointed out that Wu-Tang clan calls the "Studio" the "Ohh-Dee-Oh" so I'm sticking with that!
Wow indeed! So great to hear Suno is coming out with something similar, as I definitely prefer that platform; I've personally gotten much more usable songs out of it.
It's all just bananas how fast the AI tools keep advancing.
Cool workflow, creating a song from multiple generations.
4:54 It does sound like a crime drama intro lol! hehe, nice.
100% a show I’d watch too! Haha
AI is awesome for making shit that basic people love. If you have no personality or point of view, then AI music is wonderful for you. I've been completely unable to create a track that fits in with my personal aesthetic.
Oh, don't get me wrong-- I listen to a LOT of weird stuff that you likely can't prompt for...BUT, I do think that in our case, AI can serve as some interesting building blocks. Generate some sounds/songs-- and then mangle them up in post production and use them as elements in original music.
Right out of the box? Well-- you get those generic lyrics that ChatGPT provided me. I mean, they're fine-- but they're also kind of wallpaper.
(Sadly-- MOST people are into wallpaper music...)
UDIO to midi stems is what I'm after. Then we can fill in the blanks of our never-finished tracks.
I tried UDIO > Stem separation > MIDI export > Ableton Live > VST, but that hasn't led to anything clean. Too many instruments and samples get lumped into one stem.
Then I tried RipX DAW to clean up the AI-generated stuff to no avail. It's basically the same problem.
Anyone have an idea how to proceed? (One possible pipeline is sketched below.)
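On the stems-to-MIDI question above: one possible (and very much hedged) offline pipeline is to run source separation first and then audio-to-MIDI transcription per stem, for example with the open-source tools Demucs and Basic Pitch. The file names below are hypothetical, the exact flags and output paths can differ between versions, and dense AI-generated mixes may still smear several instruments into one stem-- treat this as a starting point, not a guaranteed clean result.

```python
# Hedged sketch: Udio render -> stems -> MIDI, using Demucs + Basic Pitch.
# Install first:  pip install demucs basic-pitch
import subprocess
from pathlib import Path

from basic_pitch.inference import predict  # returns (model_output, midi_data, note_events)

SONG = Path("udio_track.wav")   # hypothetical input file
OUT = Path("separated")         # Demucs output folder

# 1) Split the track into stems (vocals, drums, bass, other by default).
subprocess.run(["demucs", "-o", str(OUT), str(SONG)], check=True)

# 2) Convert each non-drum stem to MIDI with Basic Pitch.
for stem in OUT.rglob("*.wav"):
    if stem.stem == "drums":
        continue  # pitch tracking on drums rarely gives useful MIDI
    _, midi_data, _ = predict(str(stem))
    midi_path = stem.with_suffix(".mid")
    midi_data.write(str(midi_path))  # midi_data is a pretty_midi object
    print(f"wrote {midi_path}")
```

From there the .mid files can be dragged into Ableton and re-voiced with your own VSTs; expect to quantize and prune stray notes by hand.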
I have lost sooo much sleep using this feature
Hey Tim, what is your Udio profile? I'm always looking for more music to listen to on Udio. 😀
Should be under TheoreticallyMedia?
@@TheoreticallyMedia Found ya, ta. :-)
Very cool feature!
I'm super excited about it! The possibilities are kind of mindblowing.
Ooof. Those John Mayer stroke quasi-lyrics 😅
I mean, hilarious!!! I’d actually love to listen to a Mayer love ballad with stroke lyrics!
I want to upload entire tracks in less-than-stellar quality (mostly from cassettes) and have it reproduce them as if they were done in a professional recording studio. I may have to pay the fee to test out what is possible.
3:47
I actually MUCH prefer this version over the first
YOU'RE WELCOME-- I SUGGESTED THIS TO THEM IN DISCORD AND IT SHOWED UP LITERALLY WITHIN A WEEK (I ASSUMED IT WOULD TAKE A YEAR OR MORE TO ACTUALLY MAKE THIS-- I CAN'T IMAGINE AI A YEAR FROM NOW). I have been wanting this forever, and I double, triple, quadruple checked that it was me who suggested this... I personally see this, and its release day, as the most important thing in music history of all time / the most useful invention since thumbs. You're welcome, from Jokrtherapper
Excellent!! Uhhh, can you go over to the Discord and suggest that you and I win the next big Powerball lottery? =)
Haha, but seriously: agreed! Can’t wait to see what things look like in a year! If we’re here already, can you even imagine?
Speed up the vocal in a separate file, then reinsert the vocals into the instrumental.
The instrumental behind "John Mayer" sounds off, like in the wrong key
just here to say Ronnie James Dio also killed it :D
Haha, I don't know if you follow the channel Drumeo (Drummer focused channel, and although I don't play drums, I LOVE it)-- but they just released a video of a young drummer named Nandi Bushell recording a version of Holy Diver, having never heard it before.
She kills it! It's an adorable video! High recommend!
Now this is exciting! I was thinking of cancelling...
Part of me wants to try this but I just know that years of my life are going to slip by once I'm sucked in.
Not sure if I got anything useful from this?
"..sounds like John Mayer is having a stroke..." 🤣🤣 LOL