The next step for INSTANT MASTERING!
- Published 12 Jul 2024
- Check Diktatorial here: diktatorial.com
► Hire me for your music: whiteseastudio.com/
►Thomann affiliate: whiteseastudio.com/thomann
►Sweetwater affiliate: whiteseastudio.com/sweetwater
►Plugin Boutique Affiliate: whiteseastudio.com/boutique
►Become a member: / @whiteseastudio
Considering it’s AI and probably trained on more mainstream material I don’t think testing it with such a niche genre without vocals really gives a very good understanding of the capabilities of this service.
Darude Sandstorm is quite popular though
Thanks for the roast, Wytse. We absolutely benefited from your feedback. Please check us out again after we've moved out of the beta period too; we are constantly shipping upgrades. Gain compensation is on our to-do list. Best.
Anything without gain compensation (demos, plugins, services, adverts, etc.) I pretty much reject as a con straight away, unless the product is stunningly good (and deserves a little more time and attention): ads get skipped, services ignored, plugin demos dumped.
The little motivational quips like _"your song is ready to shine!"_ and "your song is ready to captivate your audience!" (etc.) are horribly patronising. Since when have we needed to treat everyone like small children? You may as well put "OMG your song is so cool!" after each prompt. What would be far more useful is constructive criticism; I'd rather see "with too much warmth we're starting to hear mild distortion" or "I think the stereo width is a little too wide, taking focus from the vocals" (etc.).
"Gain compensation is on your list?" Seriously? Stop smoking weed at work guys.
In the English language, the plural for feedback is feedback. 😱
@@StotheEtotheB Of course they don't want to implement gain compensation, because they would lose their only advantage, aka loudness 😂
I like everything about your channel & content! That was so cool!
I love these services, because the non-clients (aka clients who would probably refuse to work with you in the future anyway), who probably don't need a mastering service but just want to feel important by giving endless orders and change requests, can just play there. I would love to see whether this service can even maintain a similar sound across a whole album.
LOL … after gain matching the premaster sounded better. What an achievement 😂
We will never have gain compensation 😂
We are on it!
Very very happy i watched this one😂
A.I. is creeping up to ya... perhaps make some videos about how you master audio... no more plugins or A.I. things... but also your talents... Or both: first about plugins and then how you would do it... cause we need some leverage against A.I.
would love to see/hear more about your solar stuff thanks
This is exactly what I suggested you work on almost a year ago :( You already had the intuition back then.
Totally agree with you on the need for gain matching on this. Otherwise you have to pay a credit to really understand what they’re doing, by importing and gain matching, like you did.
When you gain matched, I preferred the premaster to the “master”.
Love the Glenn moment.
Hit 'em AI hard White Sea! 🤜🏻✨
😂 awesome reaction
It comes from the Fletcher Munson (hearing) curve
I wonder, if you had asked it to level match the mix until it was time to render out the master, whether it would have been able to do that?
Record labels' first use of A.I.:
"let's steal everything we can from them".
But it goes to eleven?
It doesn't matter what streaming services request. Master it so it sounds good and let them adjust it.
What would happen if you wrote "Keep the same loudness as unmastered" in the prompt?
Could you add ‘gain match original’ to the prompt?
Chill, I’ve had a long week and I feel like someone is shouting at me.
Do these prompts get stacked or does any new prompt only apply it to the original audio?
From what I've seen here, the chat doesn't seem to follow the instructions accurately. What you could normally do with a simple knob adjustment seems to require several prompt inputs which might trigger other unexpected changes.
I think this could eventually work at some point, but like you said - will it ever tell you to go back to the mixing stage to fix something?
Yep, soon, mix analysis is coming!
@@Diktatorial that's good news!
Ask it to volume match🤷♂️
It kind of made a bit of it better and a bit of it worse lol
Voice input is an interesting idea. It would be interesting on a mix: "hi-hats are too loud", "can't hear the bass", etc. I predict Moises is only a step away from this.
What's up with the Hi-hat? Tjitt, tjitt, tjitt.
Fighters!!!
hell yea. scream that shit.
SoundCloud offers a similar service for mastering. I believe they are mostly concerned with making sure the tracks are not too loud on the site. Powered by Dolby?
I like instan Nudleezz!!
I wonder if it was able to detect and remove the deliberate section of noise. A human mastering engineer would have noticed such a problem straight away. I wouldn’t be surprised if the AI service would just overlook it and a keep the noise in there.
I feel your pain.....😂
I prefer the premaster, and it's not close
Please review AAMS (Auto Audio Mastering System).
Hey, another good video! I would never let an AI mastering thing touch my tracks; I think the human feel is important in mastering. I know you're Dutch, so it's easier for me to talk to you in Dutch haha. My English is fine too, but this is just more convenient, I think. My idea of mastering is not really what it's focused on now: just slamming everything right up against 0 dB. There go your dynamics; in the old days you simply turned the volume up a bit on your amplifier. By the way, the stereo widening they apply is just ugly. I think the premasters sound nicer than those AI masters.
Interesting topic! AI's market share in the engineering industry will explode to at least 70% for sure 📈🤔, although it will not completely replace its human competitors until it can accurately estimate noise from the basic target signal without user prompts: possible, but tricky! And while it's true that tasks in mastering (particularly using programs already in the box) are quantitative and therefore something AI could parameterize and optimize... relative to what?

Human engineers have been struggling with this problem for as long as the industry has existed. AI, like a person, will need feedback to figure out what its human clients want as it strives to keep improving, yet to be a top-tier engineer by yesterday's standards, it would need to balance the expectations of at least three clients: the current music producer's needs, its own identity (i.e., the "sound" or "style" of the engineer; think "Abbey Road" versus "Nashville" and so on), and the listening public.

The first is a cinch to measure (although perhaps progressively and nonlinearly slower to develop, due to limitations in how well a music producer can understand or express his/her engineering needs). The second is hugely difficult, since in order to apply its own sound, the AI would need not only a developed sense of its own preferred, and commercially successful, style, but also logistically be able to slightly contradict client input prompts in a "work with the client" kind of way. The third is the elephant in the room: the taste and tolerance of the listening public is an ever-evolving, non-uniform, multivariate beast that's hard to even dimension, since the "mastering" of a song is only maybe 20% or 30% of why a listener might receive music favorably, making even large systems of binary or scalar feedback difficult to use in machine learning, even if the data set becomes available.
Nevertheless, AI's future is super-bright: I'm pretty sure that the same way rolling shutter, hyper-log-gamma and digital compression is now tolerated by people watching videos and movies, whatever quirks AI engineers create will be something people get used to accepting as it becomes more mainstream. Happy mixing!
Hopefully they're listening to your suggestions (and will pay you some royalties)
Please make it sound like what I forgot to do in the mix. Or make it sound like something it doesn't have. (A hopeless task for AI, if you ask me.)
The stupidity of mastering to -12 dB is still alive in 2024. I wonder when people will finally understand how streaming platform codecs work (spectral band replication, SBR). It makes no difference whether you do -6 or -14 LUFS; the damage is the same. All the streaming platform requirements are only there to mask this damage a little, that is all!
Yo Wytse. Relax man... 😉
"Because it is louder": Alex Jones moment. Loudness is Satan! I'm only listening on a laptop, but Diktatorial does seem to bring out the attack a bit on the guitar and drums, a little punchier with a slightly sharper character; the premaster sounded a bit duller. So I do think it improved the sound character a bit, adding what I would call transient attack and smack. Could you ask it to do click and pop removal, though?
Loudness TURNS THE FROGS GAY!
Loudness is not Satan... it's part of making a good and competitive master. But more important is that the frequency balance doesn't fall apart and you get no added pumping or artifacts (if it's needed, it should already be in the mix). A good master is not only louder... it sounds better when level matched to the mix.
Loudness is the game changer in these AI mastering battles; it has nothing in common with real human mastering at all 🤣🤣🤣🤣😂😂. People these days don't listen to music on good earphones or over-ear headphones. They don't know the difference between good and bad masters because they don't care. Big punch in the bass, muddy mids, it's all right, folks! Face the fact: they don't care how a mastered song sounds. Give them more loudness 🤣🤣🤣🤣🤣🤣
hahaha good chat
Gain not pain
On the AI master, there’s some phasiness to the transients and overly aggressive high passes. At least that’s what my phone speakers tell me.
It reminds me of the harsh brittleness on a lot of AI generated music.
First, I like your channel, and the things that frustrate you often frustrate me. AFAIK, the issue with True Peak is that there's no "one-size-fits-all" setting. It only matters when transcoding to a compressed format, and how the compression affects the audio depends on the program material. So lower true peak values are "safer," which I believe is why streaming services ask for -1 or -2 or whatever. I did some tests of lossy compression with TP at 0, -0.5, and -1.5. At 0, a low-level fuzziness was added to the audio when compressed. It was still "kind of" there at -0.5 TP, but it was definitely gone by -1.5. Perhaps the phenomenon is like time-stretching or "Acidization," where phase changes cause additions that can go above 0. Or maybe not... :)
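The fuzziness this commenter describes lives in inter-sample (true) peak territory: peaks between samples that a plain sample-peak meter never sees. A minimal sketch of the usual estimation trick, oversampling before taking the peak, broadly in the spirit of ITU-R BS.1770 (the 4x factor and the use of SciPy's `resample_poly` are my own assumptions for illustration, not anything from the video or service):

```python
import numpy as np
from scipy.signal import resample_poly

def true_peak_dbtp(x, oversample=4):
    """Estimate true peak in dBTP by oversampling before taking the max,
    so inter-sample peaks between the original samples become visible."""
    y = resample_poly(x, oversample, 1)  # 4x oversampled reconstruction
    return 20 * np.log10(np.max(np.abs(y)))

# A sine at fs/4 with a 45-degree phase offset: every sample lands at
# +/-0.707, so the sample peak reads about -3 dBFS while the waveform
# itself still swings to roughly +/-1.0 between samples (~0 dBTP).
x = np.sin(2 * np.pi * 0.25 * np.arange(4800) + np.pi / 4)
sample_peak_db = 20 * np.log10(np.max(np.abs(x)))
print(f"sample peak: {sample_peak_db:.2f} dBFS, "
      f"true peak: {true_peak_dbtp(x):.2f} dBTP")
```

Running this shows the sample peak sitting roughly 3 dB below the true peak, which is why a limiter that only watches sample peaks can still clip after lossy transcoding, and why services ask for the -1 or -2 dBTP headroom mentioned above.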
Wytse, surely you should be able to say to it "AI, gain match (or loudness normalise) this" ??? Dunno, just seems to me you should be able to tell it to do that.
Or ask it:
"make it sound like crap, like a megaphone in a public bathroom",
and hear what comes out.
Please keep on ranting about autogain. I'm sure the developers hear it.
It should simply be an industry standard.
It's simple. You have two options: 1) You either understand audio well and can do the work yourself, or 2) you don't understand it well and you try to come up with "creative" feeling-based prompts hoping to get something out of it and spend a lot of time on just this without knowing what you're doing. The end result will be different and I am not afraid of AI any time soon. Nor of people trying to work their way to audio through just using prompts.
It's like playing dice poker where the locked dice still change values on re-throws, just not as much as the unlocked dice. And you cannot determine precisely which dice you want to lock. Not only that, but the outcomes of the dice are not independent and uniform; the result is always biased towards predetermined number combinations, none of which are better than three of a kind.
So yeah.. Try playing dice poker with those odds.
1:38 😂😂😂
I like premaster more 😊
You use lots of outboard gear. Since my own home studio is expanding with hardware synths and FX boxes, perhaps you could do a video on how to plan this with cables, avoiding hum and such. (I was just planning to buy some things at Thomann. I'll use your QR code.)
I wonder what happens if you load an ABBA song and tell the AI to make it sound like Motörhead 😂
I feel like the "mastered" version loses too much of the middle information, am I the only one hearing this?
Why should I mix carefully and master fast?
I would only do that as an AI Reference.
Wytse is a dangerous animal!! 10 🥵🥰
AI tools are revolutionizing music production, enabling musicians without the resources to create and share their work. However, there's always the other side of that same coin. While I will continue to mix my music in my own way, AI will undoubtedly play a role in shaping all of our audio experiences.
finally AI will get this guy off youtube, see ya
Soon AI will be making YouTube audio engineering tutorials and reviews... where "it" will be critical of other AI... Not sure I wanna be around for that.
He won’t do a blind test of ai vs manual mastering.
I think "louder is better" is a myth, and it has become a dogma.
Did you time-stretch the outro? How is it possible to speak so fast 😂😮
I didn’t stretch it
When audio services have shit audio in their ads I’m so baffled
I hope these services continue to flourish... they are so bad, they are sending clients my way faster than any marketing I could do.
The next step for 'INSTANT MASTERING'... mmm, 🤔 time will tell 🙄 [edit] As always, you provide great in-depth plugin/device reviews 👉☕
Tbh I'd rap over it and wouldn't care about these small differences... I'd be more worried about the vocal and how it fits with the rest.
Sounds like the AI cut way too much around 1.2 kHz to 2.4 kHz in the mid channel, which totally kills the best parts of the sampled drum crunch and shifts the guitars out of balance. A strange choice given the material; it makes me think they haven't trained the model on a wide enough variety of music.
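For anyone unsure what "the mid channel" means here: mid/side processing works on the sum and difference of left and right, so a cut applied only to the mid hits centered elements (kick, snare, bass, lead vocal) while leaving the stereo sides untouched. A minimal NumPy sketch of the encode/decode step, my own illustration rather than anything from Diktatorial:

```python
import numpy as np

def ms_encode(left, right):
    """Split a stereo pair into mid (sum) and side (difference) channels."""
    mid = 0.5 * (left + right)
    side = 0.5 * (left - right)
    return mid, side

def ms_decode(mid, side):
    """Rebuild left/right from mid/side; the transform is lossless."""
    return mid + side, mid - side

# Centered (mono) material lives entirely in the mid channel, which is
# why a heavy midrange cut there guts drums panned to the center.
left = np.array([0.5, -0.25, 0.8])
right = np.array([0.5, -0.25, 0.8])
mid, side = ms_encode(left, right)
print(side)  # all zeros: mono content has nothing in the sides
```

Any EQ applied to `mid` before decoding changes only what sits in the center of the image, which matches the complaint above about the drum crunch disappearing while the width stays.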
Your Mic sounds distorted lol
I'm trying out a new mic actually 😅
You should also be able to keep things in proportion now and then, my dear Wytse.
Don't feel like a broken record; it's not your fault. Anyone who doesn't gain match during mix moves is making bad decisions. Or might be, I should say.
I tried Waves AI mastering, as I got one free trial. It sounded horrible; it made no sense, with super strong bass and very heavy compression.
1:39 ... To be honest, my daily business is from time to time a loudness war, but in the evening I don't like to be screamed at on YouTube. It is simply not necessary.
MY COMMENT READS BETTER!
I automatically started singing " I wish ".
Yes, now I can "correct" my projects using AI, but no one is going to correct my bad taste, if you know what I mean.
Their AI apparently only knows what wide-band compression is, and chooses VERY poor attack/release times. What an annoying amount of compression flutter.
What would I think of these newer dynamic EQ plugins? Are the automated EQ moves just very calm variations learned from previous tracks, now looping somewhere? I don't know. What's the next frontier, pay my bills so you don't sound like one of those flatliners? I don't know. Namaste.
Please, no more fake shocked face thumbnails. You're better than that!
I wonder what would have happened if you'd asked it for a level matched master?
But in any event, all I hear is that it made your mix sound thinner and louder.
What does this do that anyone can't do for himself with a limiter or finisher plugin?
nothing.
People don't understand what real mastering actually is.
Chill out, dude... Lol.
Sounds a little wider but.. ugh. Hard No, just use your ears, you’ll hear it
11 out of 10 Mastering Engineers will confirm that Mixing is bad for your eyes
From the start, like you said, I am immediately suspicious when a YouTube vid about audio has bad audio. The worst is when the audio examples are not volume matched with the YouTuber's voice. I can forgive it somewhat on live streams.
Comment for da algorithm
Streak count: 250
🙏🏻
I don't think we're going to see AI take any audio engineering jobs in our lifetime; it's the same situation as with programming, you need a human for the subjectivity. Cool video mate
My guess is that 10 years from now it will beat every audio engineer on the planet; subjectivity can be faked easily if you have the data and set the right goal for the AI
@@larsborst7121 It's going to be interesting seeing where it goes for sure
the premaster sounds better than their result.
"OF COURSE IT'S BETTER, BECAUSE IT'S LOUDER!!!!" Snake oil at its best.
AI will eventually prevail, but not yet and not in this way. I think broader and deeper analysis and millions of references could eventually give 20 different-sounding masters, and you can never ask the AI to pick the 'best' one, because luckily AI has no taste beyond the middle of the road. I think the advantage of AI is only worth something when the choice is human. Once AI has analysed and differentiated a human's (your own) choices for a decade across thousands of decisions, there could be a synergy that is helpful. Today AI is so stupid it knows all about the average but nothing specific about the current use.
I think a time will come when it can do 20 masters and pick the right one for you, based on what it has learned from your own input in A/B quizzes, your preferred music, and other data you provide that seemingly has nothing to do with music at all. Genetic-algorithm-based AI.
@@larsborst7121 Well Lars, I think you didn't understand a word of what I said. As if I'm PRO AI....
Let's keep this in English; it's a bit rude to reply in your own language, which looks like gibberish to normal people 😂. I've no idea what you're on about, but my reply has nothing to do with you being pro AI or not. Nor with whether I am. You sketched a scenario that you think might be plausible, and I reacted because I think you may be right, but I guess AI might take even that last hurdle and be able to come up with a master that is tailor-made for your taste. Quite scary actually, btw. So now please tell me what I didn't understand???
BTW, Dutch is also my mother tongue, so that part was just irony.
@@larsborst7121 Irony? Like what? Like I spoke Dutch to you by coincidence?
No difference on a phone
I share your anger and frustration with the lack of gain matching. I can fool just about anyone with just 0.5 dB of gain increase into thinking the sound is suddenly 'better.' I've done it many times in my mastering studio, secretly of course, just to elicit a reaction. I always reveal my ruse and explain why it's so important to gain match comparisons meticulously. It's shocking and a bit disturbing that people in the audio world continue to use this sneaky trick to sell their plugins (and now AI mastering sites use it too!) to unsuspecting consumers. BEWARE of this trick; don't let your ears and brain be fooled! Always take the time to gain match ALL comparisons, no matter what. Your bank account will thank you!!
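The gain matching this commenter insists on is easy to do yourself before any A/B comparison. A minimal sketch, using plain RMS as a crude stand-in for proper loudness matching (the function names are mine; a real implementation would use gated LUFS per ITU-R BS.1770 rather than raw RMS):

```python
import numpy as np

def rms(x):
    """Root-mean-square level of a signal."""
    return float(np.sqrt(np.mean(np.square(x))))

def gain_match(reference, candidate):
    """Scale `candidate` so its RMS matches `reference`, removing the
    'louder automatically sounds better' bias from an A/B comparison."""
    return candidate * (rms(reference) / rms(candidate))

# Example: a "master" that is simply 6 dB hotter than the premaster.
t = np.linspace(0.0, 1.0, 48000, endpoint=False)
premaster = 0.25 * np.sin(2 * np.pi * 440.0 * t)
master = 2.0 * premaster  # +6 dB of gain, nothing else changed
matched = gain_match(premaster, master)
```

After matching, both versions sit at the same RMS, so whatever difference you still hear between them is tonal or dynamic, not level.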
For me, the mids in the mastered version are still weak, among other things, so for now, and for this example, I can't find a benefit from this service, though one can argue the price is attractive and it isn't time-consuming. For me, these services cannot beat real mastering engineers, for several reasons.
*Eventually*... it seems like you can get there by using multiple prompts until you finally land on the sound you require. However, the result is just the one mastered track.
As an artist you want total parity between ALL of your tracks. You want them to have similar tone, dynamics, saturation, width, depth, presence, etc. And this text-prompt-style AI approach doesn't seem to be the way to achieve that across multiple tracks.
Far better to have an understanding of your hardware/plugins and to do the mastering yourself - or hire a pro (of course 😊).
"The next step for INSTANT MASTERING!" ... just the "!" made this hit like click bait lol! What a difference a [?] can make.
I don’t want any ai in any creative work it doesn’t matter how good it eventually gets 🤮
Sounds like they are just using an old copy of ozone LOL
Why did they pick the voice of a scam call operator for the AI???
Because it looks like this service was created by one of those scam call centers in its spare time ))
hahahaha it's actually the voice of our CEO 🤣
Maybe it's all a hack to get your bank account details 😂
@@Swiftopher755 hmmm feature coming soon... :D
@@cestlinn That's fabulous😂