You are the single YouTube voice that I trust 100%. Others may be at 80-90%. And the rest are grade-E.
You are the only engineer on here that confidently demonstrates that he understands the science behind the art… and how to appropriately and masterfully apply these concepts.
You’ve leveled me up after each video, and I’m grateful.
Thank you, Mr. Worrall
You're leagues ahead of everyone else in terms of audio engineering skill, teaching skill and generosity. Thank you 🙏🙏
God Bless Dan Worrel 🙏🏻🎶🎵
God Bless President Trump🙏🏻🇺🇸
I couldn't fathom the depths, so I got relegated to this sub-commentinent...
@@JimboJones99 Dan Worrall would definitely make a very entertaining president as he would be calling everyone out for their bullshit haha
Yeah he's easily one of the only influences in my life as an aspiring audio engineer who keeps me going. Dan just gives you amazing knowledge. Anyone else will try to charge you $150+ for a half-assed "masterclass", which usually ends up as a gloating session for a lot of producers. Shit you not, I paid $100 to learn from an artist I used to look up to, and every time I asked a question I'd get "idk what I'm doing I just fill out the frequency spectrum" 🤡
@@JimboJones99 😂😂😂
Even though half the time I don't know what you're talking about, it's strangely hypnotic and really interesting. One day it'll all click into place. Keep them coming. One of the best channels on YouTube
I have this mindset towards a lot of things, but it's important to remember that it will only click into place if you're regularly doing actual work and applying the knowledge you're taking in. Active learning above passive. Easier said than done though, best of luck
That's because he once was a stage hypnotist, in the same sense that Mickey Mouse was a broom hypnotist in Sorcerer's Apprentice. And all the world's a stage, according to Dr Spock.
You’ll get there !
The information Dan is giving does require a certain amount of basic math and physics.
And that can't be explained easily in a video; it needs studying and reading.
Though, to understand this one, basic high school math is enough. I mean, we weren't really taught how you sum signals and how decibels work until high school/vocational school/polytechnic.
Why do you give us all these incredible tricks? I’m so grateful
Everyone said this to me when I said I wanted to start a channel sharing my knowledge.
"Why give away all your secrets"
They were worried because I was their best engineer/producer.
But none of us progress if we don't share what we've figured out.
And there's no danger anyway, the 'secret' is just 20+ years of ear training and practice; there's still no shortcut for that, regardless of how many tricks you pass on!
I've become addicted to the dopamine hit when a video does well. It's like a computer game :)
All your videos are brilliant but this one is particularly clear and elegant.
fancy seeing you here....legend
Been following you since forever. Live sound and recording guy here from Singapore. Many thanks to the work that you’re doing for the world of ours.
For anyone using Logic that doesn't know: you can make any stock or 3rd-party plugin mid/side by choosing dual mono mode on a stereo track and changing it from Left/Right to Mid/Side.
I particularly enjoyed your backing music in this one. The female vocals really topped it off.
www.plogue.com/products/alter-ego.html
@@DanWorrall Lol I actually have that plugin myself somewhere with a few anime characters. Never used it though.
I've tried loading it since and results were laughably bad! But on that particular day with that particular mix it was just magic :)
Wow - I was thinking it was a Roland V-Synth!
One of your older videos where you demonstrated Voxengo's Sound Delay, in M/S mode, using very small delays between the mid and side (either positive or negative mid relative to side) really opened my eyes (ears?) to how beautifully you can affect the stereo field and perceived depth of a sound with such a simple adjustment.
From there, as above, thinking in terms of affecting only mid or only side, or each in different ways or amounts is, as you said: Magic.
Thanks again for creating.
Always a bright spot in my day when you post a new video. Thanks!
🎯 Key Takeaways for quick navigation:
00:00 🧠 Mid/Side matrixing may sound complicated, but it's simple to implement in code using Reaper's JS programming environment.
02:12 🔊 When deriving the mid channel by adding left and right together, remember to divide the result by two to avoid clipping on the mix bus.
03:08 🎚️ To maintain the same gain in mid/side processing, divide by the square root of two instead of two in both the encoder and decoder stages.
04:30 🌟 Some stereo distortion/saturation plugins offer a mid/side mode, which can significantly affect the sound compared to traditional left/right stereo.
06:10 🔄 To achieve a normal stereo width with mid/side processing, compensate the mid channel by dividing by two in the encoder and the side channel in the decoder.
07:08 📝 If you're a Reaper user, you can find the presenter's Mid/Side matrix code in the video description for easy copy-pasting into your JS plugins.
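For anyone who wants to see those takeaways as actual code before grabbing Dan's version from the description, here's a minimal JSFX sketch of an encoder/decoder pair (my own reconstruction, not Dan's exact listing; the sqrt(2) scaling is the equal-gain option mentioned in the takeaways above, used in both stages):

desc:MS encoder (sketch)
@sample
l = spl0; r = spl1;
spl0 = (l + r) / sqrt(2); // mid
spl1 = (l - r) / sqrt(2); // side

desc:MS decoder (sketch, a second JS file)
@sample
m = spl0; s = spl1;
spl0 = (m + s) / sqrt(2); // left
spl1 = (m - s) / sqrt(2); // right

Round-trip check: ((l+r)/sqrt(2) + (l-r)/sqrt(2)) / sqrt(2) = 2l/2 = l, so the pair is gain-neutral.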
I doubt there are many who go around with the level of audio knowledge you have, mate. And using your time to share it ❤️ thank you Dan.
I wonder if this explains the 'bug' I always experience in Slate's Vari MU compressor. Its MS mode seems to have much less internal headroom.
M/S makes a huge difference. Great tip. Thanks man!
Hey, Dan. The division by root(2) normalizes the linear transformation you've done of the plane, say the (L, R) plane, so that it's a 45° rotation of itself, and we might call the target the (M, S) plane, which is the same thing (just R^2, of course). Written as a linear transformation, it's the 2x2 matrix of 1s with the lower-left entry a -1 (for the inverse, it's the upper right entry as -1), but with the scalar root(2)/2 (= 1/root(2)) as a multiplier.
There are a couple of interesting corollaries to this, namely, that the vertical movement of a stylus on a vinyl record is actually the mid channel (using the root(2)/2 version, that is) and the horizontal movement is the side channel.
Secondly, there is a continuum of rotations of the (L, R) plane that are possible, not just 45°, which could be used exactly as mid-side is used (e.g., rotate the plane by an angle theta, do some processing, and rotate it back by -theta), and might be musically interesting. It would be absolutely trivial to write these transformations for reaper, once you have the mathematics of it conceptualized.
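If anyone wants to try that rotation idea in Reaper, here's a minimal JSFX sketch (my own; the angle slider is an addition, and the sign convention of the second output may differ from your favourite encoder). At 45° it reduces to the normalized mid/side encode, and a second instance with the opposite angle undoes it:

desc:Stereo field rotation (sketch)
slider1:45<-90,90,1>Angle (degrees)
@slider
theta = slider1 * $pi / 180;
c = cos(theta); s = sin(theta);
@sample
l = spl0; r = spl1;
spl0 = c*l + s*r;  // at 45 degrees this is (l+r)/sqrt(2), i.e. mid
spl1 = -s*l + c*r; // at 45 degrees this is (r-l)/sqrt(2), i.e. side (up to sign)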
You are absolutely correct! You're describing the Blumlein stereophonic field. Can you please point me to more in depth information/background on this subject? Jim
Big brain post. Thank you for sharing this insight!
❤️
every time you upload bro that's like a second birthday
I love the education! And yes, it's only a select few distortion plug-ins that have MS mode on them.
It's quite inspiring how it affects the sound.
Thanks Dan!
Saved this to my private playlist for future reference; great work Dan!
When you hit like, and comment, during the pre-roll ad... Keep doing what you do man, we love it!.. 🙏
Another interesting trick for saturation plugins is to invert the phase of one channel before the plugin and invert it back after the output.
I'm intrigued, what is the sonic effect of this trick?
Polarity, not phase. If you inverted the phase, the signal would be running backwards. :)
@J Right, because that's what Dan's channel is all about, not insisting on details and getting things right, just being content with wishy-washy brushed-the-surface does-the-trick. And for what it's worth, correcting someone's incorrect use of a word is not a grammar lesson, it's a vocabulary lesson. You're welcome.
Phase is correct. First, because flipping polarity results in 180 degs phase shift. Second, because it's common usage and everyone understands the term.
You're correct that I usually try to use the term polarity when that's what I'm talking about, but only because it's more specific, not because it's more correct.
@@DanWorrall I respectfully disagree. Common usage does not make something right. People say Kareoke, Weiner and Janurary all the time, and none of them are correct.
By "invert the phase" vladg meant to say (or so I assume) that samples on that channel with positive values become negative samples, and samples on that channel with formerly negative values become positive samples. Positives and negatives are the literal definition of "polarity". So "flipping polarity" is the correct term to describe "making positive samples negative and making negative samples positive".
The term "phase" refers to the position of an oscillator or sound wave in its cycle, this could be handled in degrees (0° to 360°) or a float value range of 2π (-π to +π, or also 0 to 2π). Take a sine wave, because it's easy.
For the first quarter of the phase range (0° to 90°, or -π to -0.5π, or 0 to 0.5π), the sample value has positive polarity and rises up.
For the second quarter of the phase range (90° to 180°, or -0.5π to 0π, or 0.5π to π) the sample value has positive polarity and falls down towards the center silence again.
For the third quarter of the phase range (180° to 270°, or 0π to 0.5π, or π to 1.5π) the sample value has negative polarity and falls down below the center silence.
For the fourth quarter of the phase range (270° to 360°, or 0.5π to π, or 1.5π to 2π) the sample value has negative polarity and will rise up again towards the center silence.
An inverted sine is effectively like beginning a sine cycle halfway through, and plopping the sine's first two quarters at the end. So in a scenario where a sine wave and an inverted sine wave both start at the same instant, the sine wave will start into its "upward hump" first and go into the "downward hump" later, whereas the inverted sine will start into its "downward hump" first and go into the "upward hump" later.
My point is: at, say, 90° into the phase (i.e. 1/4 of the cycle), the sine has a positive polarity and the inverted sine has a negative polarity.
But both are at the same 90° position in their phase cycle.
This is like recording a top and bottom microphone on a snare drum. They both start receiving their signal at (as near as damnit) the exact same time, i.e. their phase cycle kicks off at the same point in time. And yet, the signals they record will have inverse polarity, the bottom mic will probably start with a positive transient and the top mic with a negative one, or vice versa.
And we all know what happens when two fairly identical signals with inverse polarity along their phase cycles meet: they start cancelling each other out. Just like adding a sine and an inverted sine will result in total silence. It's just a mathematical fact.
Sticking to the example with the inverse snare mics, there are exactly two ways to solve the problem.
The fastest, most common and effective solution would be to "invert the polarity" of one of the mic signals, so that whatever was sub 0 before now sits above 0, and whatever was above 0 before now sits below 0. This brings the polarity along the phase cycles of both microphones into sync, both now "go up" and "go down" at the same time.
Another method would be to adjust the timing of one of the recordings, i.e. drag it ahead or behind on the track, thereby also kind of approaching a state where both signals "go up" and "go down" at similar times. The drawbacks are obvious: the adjusted signal starts its phase cycle earlier or later, meaning one signal is shifted in its temporal relation to the other signal. Timing issues, definitely flams and slapback effects, maybe even comb filtering. Because the shifted signal's PHASE is no longer aligned with that of the unshifted signal, while their polarity now is.
Phase and polarity are two related and yet entirely different concepts.
And it's so simple to get it right.
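For what it's worth, the two operations being debated look completely different in code. A minimal JSFX sketch (my own illustration, not from the video): the left channel gets a polarity flip, the right channel gets a time shift, so you can see that one is a sign change and the other is a delay.

desc:Polarity flip vs time shift (sketch)
@init
buf = 0;   // start of a small delay buffer in local memory
len = 64;  // 64-sample delay, purely illustrative
pos = 0;
@sample
spl0 = -spl0;        // polarity inversion: every sample changes sign, no time change
delayed = buf[pos];  // time shift: the same waveform, just later
buf[pos] = spl1;
pos = (pos + 1) % len;
spl1 = delayed;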
'It's not so complicated, really
let's write some code!'
Dan you're the best! For real! I am very thankful for the cheeky knowledge you implant in my brain!
Again... Dan is one of the few who can truly call themselves an audio ENGINEER... I've met too many who proclaim themselves to be audio engineers, but lack the scientific understanding of what the hell signal processing actually is, how human hearing and perception really works etc.
But Dan... he's an ENGINEER.
What is crazy is that I have been experimenting with this since I got the non-MS version of HG2 in the giveaway Dan did earlier this year. Thank you for the cool insights!
LOVE the ethereal vocals on this track man!
Variety Of Sound uses mid/side a lot for his saturation algorithms. And they sound glorious imo. (He wrote the Slick EQ saturation code for Tokyo Dawn, if you recognise the name btw.)
Thanks for signposting Variety Of Sound - hadn't heard of him before but it seems like really valuable freeware. Lots of 'good as/better than paid' kind of comments around it!
@@joecm you're welcome, enjoy! Been using it professionally for a decade.
One of the best transformer models out there. Probably only second to Fabrice's work with Slate imo.
Very subtle stuff if you haven't got the ear for it but just very good 'flavour' gear, free or otherwise.
This channel is genius, and I was getting back to programming... so this is actually beyond perfect as I can do some audio coding!! And as I am a neophyte on mid/side... it will be a brainiac exercise par excellence!
You're amazing.
Just when I think I’ve got an okay grasp of mixing concepts. Dan releases a video.. Thank you.
I did precisely this with JSFX years ago (including compensating the volume using the square root of 2 so I could use a single plugin for both).
I'm rather chuffed to see something I worked out myself in a video from someone I respect so much.
Thanks
would love to see an in-depth tutorial about M/S compression with an emphasis on not just what it is or how it works but *why* you'd want to do it and a few demonstrations of its possible effects. Also--the backing track is your best tune to date, IMO.
No better way to start a Sunday afternoon than with a bowl of cereal, a spliff and a new Dan Worrall video.
Didn't expect to get a lesson in coding, but it is what these options run on. Thx for the in-depth review.
At first I was worried I was gonna have to save this video until I got into the studio to hear the differences but even on a phone this makes a huge difference and another tool I look forward to trying to exploit on my next mastering session! I’ve used lots of MS on compressors and eqs but never saturation, never really came to mind
As always, perfect timing. Just ran my new culture vulture in m/s, it's incredible
Dan’s videos make me happy, part 2
Voxengo, the best plugins ;)
or you could just spring for the Blackbox HG-MS plug. LOL, just bustin on ya Dan. You're the best!
Mid-side is easy in FL, using Patcher, and I pretty much use it on every sound I want wide. Fruity Stereo Shaper does a really good job with that.
Ableton's 'Utility' plugin. Right click over width, and mid/side becomes a ratio.
Waves StudioRack. Create parallel channels. Toggle the setting at the top of the channels (stereo, l, r, mid, side).
Mid/side has been my go-to for years, not on all songs for sure, but on the ones I do mid and side I've even sidechained the sides with the mid as the input 😂 all those tracks really sound insanely good on my IEMs
Mid side needs to be a lot more mainstream than it is! Thanks for pushing the ball a few more yards down that field Dan. The amount of mixing I can accomplish just pulling away junk with ProQ 3 mid-band cuts continues to amaze me.
that's because the pros know using mid/side EQ on everything ruins your sound....
I love that this starts with you writing a plugin _in the DAW's built in plugin editor._ Try that in any other DAW but REAPER.
That was really beautiful - I have no idea why this coding example has helped me understand mid/side, since I do not ever code.
Love your thoughtful explorations, as ever. The vocalist on the music you used for this tutorial has a clear, soaring voice that reminds me of Annie Haslam. Who is she, and what is this music, please?
Music: check the description for a bandcamp link. The singer is here: www.plogue.com/products/alter-ego.html ;)
A synthesized vocal? Holy cow!
Note that Reaper already comes with m/s encoder and decoder in JS by default.
Wonderful piece you wrote.
Would be cool if you could teach us how the "focus" and "field" stuff in the Mr.MS plugin works. It seems really interesting, making side information from the mids and vice versa, but it's only available with iLok so it'll sure as hell not end up on my PC.
This is genius how have I not thought of it before!
Thank you Dan. I've used the MS JS plugin that already comes with Reaper but I'm surely gonna test yours. Much appreciated.
As someone with a modular synth, Mid/Side is a great way to use two different analog synth filters on a stereo source without running into accidental panning issues (because e.g. you might have two different filters instead of two perfectly calibrated ones of the same type).
Been waiting for this, thank you sir!
Is Dan Worrall the Jimi Hendrix of DAWs? Seriously Dan, we love your stuff and IMO it is next level. Thank you so much for sharing it. FWIW a lot of us speak code. It will be like knowing how to use a compressor in the future.
Dan, thanks a lot for this. Only you would be able to explain this in such a simple way and at the same time include a big nugget in the middle of the video. You rock for real.
In Reason I have an MS matrix using two Thor synths lol. Nice point about where to put the gain compensation
Interesting that you used the Black Box HG-2. I wonder if the MS version of that does any gain compensation for the side channel? I confess that when they released the MS version of that plugin, I wondered what the point was.
the M / S technique is very interesting and very important. Very important
Really like the song you wrote for backing this video, I think you've used it before. Either way, it's a keeper!
I split the audio into a mid and side channel before it goes to the master bus. So I can process the mid and side signal separately with any plugin.
What plugin do you use to do that?
@@UPPERKELLER Isol8 (TB Pro Audio) - it's free
Dan, man, these videos are gold and the depth of your understanding is unmatched. Where did you learn all this? Did you figure it out yourself or was it through school? I really need to know
No schooling. I dived in and learnt on the job.
I found M/S very early in my career, when I would try anything. Many of my recordings are either only an M/S decoded pair - usually U87 - or traditional multitrack with a featured M/S pair. There are countless advantages along the way, like a perfect center channel whether or not I have the time to make stereo. The ability to edit the microphone ‘positions’ after the fact is as close as it gets to a free lunch.
Seriously Dan, please open an academy online and just TAKE MY MONEY!
No, stay here, I can't afford expensive courses.
No what he's doing now is great enough
Ditto
🤣🤣👍🥇 No better way of spending your money under the sun.. better.. ever!!
If you want to give him money you can just buy one of his t-shirts or leave generous $ comments. I’m sure you mean you want a deeper course explaining everything so you are an amazing mix engineer right away. Unfortunately years worth of hard listening is required to be a good engineer, regardless of the information someone else has told you. That said, i would definitely enjoy a deeper dive series like that.
Dan once again pushing the audio engineering boat out to new and beautiful places
I don't think he's pushing it to new places as much as he's showing how low the bar for an "audio 'engineer'" has been for a long time... it's so often a self-proclaimed title thrown around by people who don't meet the requirements to call themselves engineers, to impress their clients and bill more money from them.
I've always hated it. For example, I've met so many high-earning guys who don't have a clue how a compressor really works, what happens to the signal, and how it's distorted by the different adjustments on the front panel... they just wing it, and by trial and error they find something "pleasing" without any clue what they're actually doing.
To be an engineer is to be well read in math and physics, to have spent those ~4 years after high school in higher education, and to have gotten a degree. Or at least to have all the knowledge that implies.
If you ask a self-proclaimed audio engineer to explain what the Nyquist-Shannon sampling theorem says about AD converting a signal and DSP, and they look at you like "a what now??"... they're full of it. It's a basic, fundamental theorem that should be understood by an engineer, among various, various other concepts around signals and waves.
A good audio engineer also understands the basics of electronics (at component level: how to make circuits and how different analog circuits work) and programming (because of DSP), because 99% of audio is electronics recording, altering and reproducing signals. If they don't know what Ohm's law is, can't "draw me a rough sketch of how a mic preamplifier works", or can't answer "if I want to break down a sampled signal into its constituent frequencies, what is that process called?"...
Almost any true audio engineer can do these things... they might not remember the details or be specialized in mic preamplifiers or DSP as such, but these are the basics that you need...
I know all this stuff. I'm an electronics designer, AV technician, programmer, AV service technician etc... but since I haven't done the work and gone through the higher education, I restrain myself from calling myself an engineer.
And so should everyone else if they haven't done the work :)
Thanks. Grateful as always for your sharing.
Oh, I missed the part that this is the old-style Black Box; the newer version has M/S on it. I always remove the toolbar at the bottom of the plugin, mostly to ignore that TMT rubbish, especially for mastering plugins where that TMT crap isn't helping at all. That said, I like the Black Box very much and in M/S mode it does real magic. Thanks, Dave, for pointing this one out to the ReaPack stash.
The other day I was thinking about how, with balanced audio, you could make a passive encoder and decoder. I've been meaning to do it, I just keep forgetting. This def makes me want to, though
Holy s*** I've luckily never needed to use the Enc/Dec in MSED, I have MS plugins. I've had it for many years, of course because of you. lol But nowhere in the manual does it clear up that the side channel would be quieter than the mono. Wow, & that's why FabFilter approaches you for them tutorials❤🔥. Loved how you said at the end "or adjust it by ear like a normal person". 👏(I'm always adjusting either way haha)
But which side are you going to take? The mid side or the side side?
Never thought I'd be getting a basic coding lesson in a music production video but here I am.
M/S is one of the final frontiers I'm trying to tackle in my mixing repertoire. Can't quite wrap my head around where it fits in my processing chain and why I would use it. I know what it does, but it's not quite intuitive yet. Hope this video answers some of those questions.
Thank you Dan! Your knowledge is huuge!
So much of your material rewards revisiting
Shoutout to Mid/Side for being easier to work with in general than L/R (unless you're mixing only acoustic audio/instruments in which case L/R is specifically very useful).
I'd like to also mention Mid/Side expansion (or compression) for being very useful on all sorts of things, gating the Mid of a reverb bus more aggressively than the Side can be a useful way of making room for other elements (as is expansion/gating reverb in general).
My head is spinning; will have to watch this a few times, lol.
Thanks professor Worrall! :)
One thing that has been bugging me about mono summing is that you will get a cancellation. What I have been doing as an experiment is to isolate the "side" bit, invert polarity on one of the channels, and THEN sum it. This lets you keep that bit that usually gets cancelled.
You're adding the side channel to the mid channel = the left channel.
Don't overthink it. The sides are supposed to cancel in mono: it's literally the difference.
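For reference, the algebra behind that reply, with the common half-gain convention (my own working, not quoted from the video):

M = (L + R) / 2,  S = (L - R) / 2
M + S = (L + R)/2 + (L - R)/2 = L
M - S = (L + R)/2 - (L - R)/2 = R

So summing the side onto the mid literally reconstructs the left channel, and subtracting it reconstructs the right.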
@@DanWorrall I find that in mixes with double tracked guitars, doing this keeps the guitars sounding equally loud in the mix VS vocals and drums etc.
I never have a need to sum to mono, so it's honestly just a stupid little technical pet peeve of mine :p
@@DanWorrall when I first discovered mid/side EQ i tried doing somewhat surgical EQ on double tracked guitars, separate treatments on the mid and side parts. Then one guitar stopped for a part and only the left guitar played for a bit. Dear god the resulting sound was abhorrent.
Think of the cancellation as an advantage! When there is only one speaker channel, it's easier to over clutter the mix. If you place non essential tracks and effects toward the side, they will cancel in mono and leave room for the more important sounds.
@@sparella in a bunch of genres the presence of the hard panned stuff is very important, and the cancellation is a bad thing. so it depends on the context and perspective.
@5:49 How much gain should you add to the side channel for it to be the same? Or is it best to just judge it by ear? The distortion comparison at the end, on the full mix, is really interesting, really gets you thinking of how we can use this to our advantage! Thanks
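In case anyone wants the numbers rather than judging by ear (this is general matrix arithmetic, not a specific figure from the video): undoing a divide-by-two needs a gain of 2, i.e. 20*log10(2) ≈ 6.0 dB, while undoing a divide-by-sqrt(2) needs 20*log10(sqrt(2)) ≈ 3.0 dB. Which one applies depends on which scaling the encoder/decoder pair used.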
I’ll be right back; let me go enroll in “Learn to Code” 101 before I can understand half the stuff Dan talked about here. Great video Dan! Thank you for your insight and knowledge.
Really cool track ! (And very interesting content as always)
You could still have just one MS encoder/decoder plug-in if you added both encoding and decoding modes plus some way to switch between the two. It would be a bit more code, but for such a simple plug-in it wouldn't make much of a difference in terms of size. Also, it would be great to have MS processing as a feature for all plug-ins in Reaper, just like the delta-solo and oversampling features!
Well, both encoder and decoder exist as stock JS as well (though these use the sqrt(2) method by default) and I've used them a lot, because you can always route side only or mid only through any effects, and do so easily in Reaper. The same is true for multiband processing, where you can use the band splitters and band joiners (3, 4 and 5-way as stock JS, but from there you can easily work out the code for more bands). I've got an FX chain preset with just the mid-side encoder and decoder, with linked gain for the decoder and encoder, so I can split the 6 dB of gain reduction between encoding and decoding any way I like, with a single control for mid and side. Then it's just a matter of moving any plugin I want to operate mid/side between the two.
@@simongunkel7457 Check out Airwindows EdIsDim, a GUI-free VST/AU pair that does this. Also Airwindows MoNoam, which is a mono-ized mid/side that sometimes works even better than mid/side encode/decode pairs. Just makes doing this super simple (and has some additional combinations/trickery built in) and easy to experiment with various combinations and settings.
@@simongunkel7457 you should paste the text for others
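A minimal sketch of the combined plugin suggested above (my own sketch, using the simple halving convention rather than sqrt(2), so the encode/decode pair comes back at unity):

desc:MS encode/decode (sketch)
slider1:0<0,1,1{Encode,Decode}>Mode
@sample
a = spl0; b = spl1;
slider1 < 0.5 ? (
  spl0 = (a + b) * 0.5; // mid
  spl1 = (a - b) * 0.5; // side
) : (
  spl0 = a + b;         // left = mid + side
  spl1 = a - b;         // right = mid - side
);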
Love you Dan! You're my kind of engineer
Love the video, the coding gave me a laugh. If you can tell me why you decided to code, I would love to know
This is why the vertigo and black box are sooooo goood
Pardon the ignorance, but can one achieve the same by the parallel routing method? 🤔 My DAW is FL Studio, so I'm not sure if we have the same options when it comes to the mixer. I often multi-route a given input and adjust the outputs on a per-track basis, and say add grime to half a signal's frequency range while doing something completely different to the other half. I have a plug-in that does Mid/Side, and the only caveat is that it utilizes one of the send channels to split the signal instead of a standard mixer track. But I don't really perceive the difference. I'm interested in understanding the topic in more depth though, if you can suggest some literature. Very cool demo, regardless 👍 Thanks for sharing ✌️
I don't know FL Studio well enough to comment on the mixer options, but if you download the free MSED plugin from Voxengo you can do MS in any DAW.
@@DanWorrall Very cool! Thank you! ✌️
Yesterday your "Delta" bus saved my life 😁
Can't wait to see this video 😊
Brilliant video. I always thought mid/side was some kind of audio voodoo that only engineers who wore aviator sunglasses and drank mead could comprehend. Thanks!
Thanks Mr Worrall, impressive skills. My solution would employ a bit of algebra with a level variable to solve for the actual ratio of relative gains, so the encoder and decoder stay forward/backward compatible.
I don't think I'll ever go as far as to using code for anything even though it seems very interesting. So, I think I'll just stick to the Voxengo MSED device for now. 😊
I need more lessons on M/S mixing, Dan. If you haven't made a beginner tutorial on M/S processing, pls make one! I need to know how this works.
Holy... It makes the diamond a square in the goniometer if I am not mistaken? Never seen that before. And what does it mean?
It means it sounds different :)
@@DanWorrall as usual someone at some point has to state the obvious I assume :-)
I will for sure try this out.
Thank you, Mr. Worral
Useful information as always, thanks! Also liking the track in the background a lot 😃
Very interesting video. At 6:13 you declare left as spl0 + spl1 * 0.5, which would mean you are adding half of spl1 to spl0? Shouldn't that be (spl0 + spl1) * 0.5, because multiplying comes before adding? Sad my DAW has no scripting engine, that would be very exciting. Thank you for what you post, it's always a welcome brain exercise.
I want to scale just the side channel, which is spl1, before I add to or subtract from spl0 which is already scaled.
@@DanWorrall Okay, thanks. I just wanted to know for sure.
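To make that precedence point concrete, here's a minimal sketch of the two variants being discussed (my paraphrase, not Dan's exact listing):

desc:Precedence illustration (sketch)
@sample
// scale only the side channel (spl1) before combining, as Dan describes:
left  = spl0 + spl1 * 0.5; // mid + (side * 0.5)
right = spl0 - spl1 * 0.5; // mid - (side * 0.5)
spl0 = left; spl1 = right;
// by contrast, (spl0 + spl1) * 0.5 would halve the whole sum, which is the encoder-style scaling.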
Makes you wonder why Elysia are (as far as I know) the only ones to offer an LR/MS analog saturator (the Karacter).
Thanks for the info, and the Javascript 101 class too ;-)
Dan, in the last screen shot of the code that you showed, you left out the parentheses, so the multiplication by 0.5 would not, due to operator precedence rules, be applied to the sum. Otherwise, thanks for another great video! And, I use the black box hg-2ms, which I love.
ua-cam.com/users/shorts_VrjNQnydWg?feature=share
;)
But wait... What if i want my side information unlinked for left and right sides? So left side, middle, and right side channels?.. Hmm... Never thought about it before.
That's neat Dan, the mid/side saturation tip is definitely something I will keep an ear out for. But instead of tweaking the encoder/decoder maths, couldn't you just treat the mid and side separately in mono saturation plugins and use the gain knob to achieve the same effect?
Yes
What free vst would you recommend to achieve this level of simplicity?
@@AlchemicalAudio any plugin can be set up with two instances processing mid and side. How simply will depend on your DAW: it's easy in Reaper if you know how.
Sometimes when you press mid/side on plugins it puts the stereo on one side and the mids on the other, and some come with the encoder pre-built. The manual never really tells you which one it is
Re the difference between polarity and phase. Am I missing something?
A phase shift of 180 degrees produces the same result as inverting the polarity of a sine wave. But only for a single sine wave: sound is nearly always not a single sine wave, and then inverting polarity will not have the same effect as shifting phase by 180 degrees. A 'real' wave representing sound is complex. A Fourier transform can equate the complex wave to many sine waves added together, and each component sine wave of the Fourier equivalent has a different frequency, so how do you shift the phase of all of them by 180 degrees at once?
In other words if you have a 'real' complex wave invert the polarity and add to the original it will null. (Principle of superposition).
You will not be able to null it by reproducing the original wave and shifting one of them in time.
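That matches the standard theory, for what it's worth (my summary, not from the video): a polarity flip maps x(t) to -x(t), which inverts every frequency component simultaneously, so x(t) + (-x(t)) = 0 for any signal. A time shift maps x(t) to x(t - τ), which rotates each component's phase by ω·τ (a different angle for every frequency), so a complex signal plus a delayed copy only partially cancels (comb filtering), and only a lone sine whose ω·τ is an odd multiple of π nulls completely.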
Thank you for demonstrating. I think Vertigo VSM-3 offers M/S on both 2nd and 3rd harmonics. Is it a good idea btw to mono the bass on your master? (I try to do this on my separate tracks but have always wondered.)
Yes, but don't go too high: I like to have stereo content right down to 100Hz and I consider that important. And watch out for phase shift changing the stereo image, see here: ua-cam.com/video/uZ9WQDojQt8/v-deo.html
@@DanWorrall yes thank you, I read that for lower frequencies it's best to EQ with linear phase EQs to avoid phase shifting?
@@creatingspacesproducer complete nonsense, sorry. Low frequencies are where linear phase pre-ringing is most likely to be audible, in fact! You almost never need linear phase EQ. Don't fear the phase shift!
Monoing bass frequencies is one rare exception: linear phase can do this without changing the stereo image. It's still not always better though, sometimes the change from the MS phase shift is good.
@@DanWorrall ok tx, i saw it here and thought it was interesting :) ua-cam.com/video/V6qL9JrSQkQ/v-deo.html
Waves drives me nuts for a number of reasons that I won't get into, so I stopped using their plugins. That said, the regular availability of M/S was nice. I'll especially miss the Omnichannel which had a lot of control over what was M/S or L/R throughout the channel strip, including saturation, which I used often back when I could still stomach Waves.
Thanks for the video. Please do a Kirchoff EQ review, would like to know if it's snake oil or a real Pro-Q killer.
Great educational video. Actually the matrix is very simple: (L+R) + (L-R) = 2L and (L+R) - (L-R) = 2R, so we get louder channels; once we know that, presumably we can manage it.
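For completeness, the normalized version of that round trip (the sqrt(2) option mentioned in the video) comes back at unity gain:

M = (L + R)/sqrt(2),  S = (L - R)/sqrt(2)
(M + S)/sqrt(2) = ((L + R) + (L - R)) / 2 = L
(M - S)/sqrt(2) = ((L + R) - (L - R)) / 2 = R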