As a young man I immersed myself in music, mixing and production. I'm now middle-aged and wish I'd undertaken something less complex like rocket science.
The reason why audio is harder is that we're dealing with something that is both tangible and intangible at the same time. Rocket science, you can rely on hard numbers to produce a predictable result. Audio, move the speaker slightly, and it messes up everything and you have to do everything again from the beginning.
I’m always in awe of Dan’s ability to meld his technical and artistic talents into these intimidating displays of audio engineering skill. I’m an engineer and a musician, and to me it feels like he has the analytical capabilities of the most rigorous scientists combined with the empathy of the deepest creative. And then he’s generous enough to share lessons like this with us. I share this channel with anyone who will listen. Thanks Dan.
He’s absolutely right, but he’s only proving a point, whilst missing the bigger picture: The majority of people today listen to music with headphones. Only a tiny percentage of that majority listen on headphones that are worth a damn. I’ve been in the game for 25 years and mixed for several A-list artists, and I understand completely… but if I were smart, I’d probably just save my money and time, and mix on Apple wired earbuds, because at the end of the day, that’s still what the majority of people use, and this fact and trend isn’t going anywhere. The days of audiophiles with large, expensive gear, with large expensive speakers sitting on pyramid points with little slabs of slate under each point, are over. In 10 to 20 years, that generation will be dead, and so will the days of the home living room audiophile listener. Don’t even get me started on car stereo systems 🙄 There isn’t a seat in the car that comes close to providing an accurate stereo image. So if we were all smart, instead of being purists, we would mix on those Apple earbuds, and check our mix in the latest, highest selling, shitty economy car, and let the past go. It’s the hard truth, but it’s the truth ✌🏻
There’s a part of me that realizes that this is incredibly insightful and important, and then there’s the part of me that doesn’t quite understand anything Dan talking about. I’ll have to watch again tomorrow.
I have been perplexed by this exact phenomenon for literally years with zero understanding of what it even is or how to properly describe it. That sense of "the sound is tiny and inside my ear", or just otherwise poorly placed, but with seemingly no way to put it where I want it. Randomly stumbling upon this video and hearing the moment you enable the phase shift was like having the universe's secrets laid out before me. Absolutely stunning.
@@leaveitorsinkit242 I'll take a stab at it. The speed of sound is roughly 1ms per foot, so if you delay the sides by 1ms (millisecond), it will sound like it was recorded from 1 foot back from the mids. It helps give it some separation, specifically depth. It is essentially shifting the phase of the sides in comparison to the mids. Anytime you introduce a tiny delay, it shifts the phase slightly. You can drive yourself crazy messing with the phase correlation and making sure the phase is "perfect", but in reality you don't really want it to be "perfect", you just want it to sound good and make sure that it's not canceling the frequencies that you want of the instrument.(fully understanding phase cancelation is critical, and what I consider the quintessential intermediate mixing knowledge) If you mic up a drum kit with close mics, overheads and room mics. All of these mics will be getting the signal at slightly different times, and that is what we want, otherwise it would all sound like it was a bunch of mics the same distance away and wouldn't sound like a 3d space. It's the subtle phase differences that allow our ears to perceive the distance involved and therefore depth. Of course we take great care at the time of tracking to make sure the phase isnt causing cancelation, by slightly moving the mics closer or further away (inches of difference) until it sounds full and rich when listening to all the mics together. If you went through and manually lined up all the tracks where all the peaks and valleys were perfect, than the phase would be "perfect " but it wouldn't sound like it had the space that you were going for when you put up all those mics. It would actually ruin the effect. So knowing that, you can artificially recreate this effect by using short delays, like 30ms and below, remembering the 1ms=1ft rule. Those types of delays can help you shape the soundscape from front to back, along with many other things. But keep in mind that you will only hear the effect of the delay spatially if you are listening to the delayed track in reference to a same or similar un-delayed track (just like the sides are percieved against the mids in this track because a lot of the same information is in both the mids and sides). A delayed track on its own will only sound like it was played a little later, but heard with the same signal un-delayed, it starts to change the spatial relationship. There are also creative effects with short delays. Look up a video on the Hass effect, where you can actually make something sound like it's even further left or right than hard pan LR. That trick has been used in many famous productions and I feel like it will help you understand what this trick in the video is doing, just slightly more exaggerated with the Hass effect. Phsycoacoustics. Yay! Hope this helps, 🙏 👌 have a great time on your mixing journey.
@@aholder4471 just to confirm, if I went into audacity, and delayed the right side (assuming I planned both channels left and right) by 1ms, it would provide the Illusion that the soundstage is wider?
@@TimberWulfIsHere Yes, but he is using a middle/side feature of a plug in and delaying the sides by a ms in this video, not left and right. If you went in and delayed the right channel of the entire mix, it would honestly just mess your Soundstage up and probably not be what you're looking for. I would leave the delay of left vs right to individual instrument tracks, not the whole mix but hey, it can't hurt nothing to try it. That's the best and worst part of digital audio, you can try a bunch of stuff and not have to commit to anything.
@@aholder4471 I just played around with it, it actually does work, but i can also see what you mean; it does sound ever so slightly out of sync for some sounds at least.
I'm a developer for FL studio mobile and this video inspired me to add m/s delays to it in my first week of working there - just thanks to this video. Thanks Dan 🔻 Exit: talking about the spacer module in flm
Personally ive never understood this "no one listens in mono" argument. Even if your mix is never played back in mono, using M/S monitoring to shift density and dynamics makes a worlds difference in getting a mix that is perceivably "wide" but maintains depth and focus.
exactly, but i do understand those who don't get mono mixing. Because the newer generation that do their work from home, only use stereo mix. and then its up to the one who makes the master of the song, to do the mono lvl'ing (as i would call it, when its only mixed in stereo speakers :) )
but then again many newer headphones that mainstream users buy are fake stereo where there is a driver that spread the sound between the 2 mono left right speaker so it sounds "true", and by true i mean massively coloured and therefore not the true mix you are hearing :)
There's another thing that's AMAZING about this technique that might not be so obvious at first: Mono-compatibility. Utilizing a mid/side channel delay will NOT affect the timbre OR level in mono playback at all (give or take a fraction of a ms of delay on whatever you apply the technique to). This is because when converting to mono, hard-panned material will not change in level, and the side channel will disappear completely. This is unlike an LR imaging technique like Hass delay or LR filtering, which will introduce a sort of flange/comb effect when you convert to mono. Genius!
Yes indeed! I was initially flabbergasted about how this doesn't screw with mono-compatibility at all. I'm still trying to wrap my head around what I'm hearing exactly, and why. I mean I know what M/S processing is and how the encoder/decoder works (which is simple enough in itself), but I still find it generally quite counter intuitive how things behave sonically when processing them separately. I suppose the M/S terminology is already confusing. Sum/Difference captures the meaning of the process more accurately imho. But anyway, after playing around a bit with delaying the mid-channel, I wouldn't slap a mid-channel delay on the mix-bus at the end of a mix session as it screws around too drastically with some very specific mixing choices I made. But this technique certainly opens up some very intriguing possibilities when applied with focus and creativity! (as Dan demonstrated extremely well)
It is absolutely incredible the fact that you are handing out knowledge of this depth for free. This is specially great for vocals or any kind of non-digital instrument that one may record on studio. Amazing stuff man, keep it up.
I have been making music in DAWs for 20 years, since I was 12. This video finally explained to me and made me understand, what the hell a sound engineer's job is, and why it's different and separate from what I've been doing.
@@bluestringmusic Correct. I've seen hundreds of "tutorials" on music production to this day. Most of them hardly taught me anything. With this video however, I feel like I just went on an experimental trip. Tons of great information here, something many do not include into their videos. This is QUALITY demonstration, not some over-the-top animations, 10 second cuts of the work process or time-lapses. Some people also like to just slap bunch of filters on their mix, while they're convinced it's always the same process to mixing and then they call it a day. I wouldn't know delay made such a profound difference, or how the channels are encoded after panning hard left, right, mid. It feels good to hear a difference, but also UNDERSTAND why there is one.
I have over 10k in home theatre, studio, pro audio, and simple Bluetooth speakers everywhere in my house. My wife still listens to all her music on the phone.
5:30 was still a tad difficult to pick up on, until you solo'd an element out, thanks for teaching me something I always felt but didn't know how to acknowledge, or label
Well this was really interesting and mind-expanding. The clickbait title was answered within 90 seconds, but as I heard him expanding on what you can do with it once you know that, I couldn't look away. This seems like the result of decades of engineering experience, the sort of stuff I'm not even sure Alan Parsons was fully aware of and utilising. Add to that the fact that all this improv source material is by the same guy. Absolutely brilliant. I will have to watch this a couple more times before I fully understand it, I think.
Wow! that was a real metaphorical ear opener! I'd listened to the video a couple of times and didn't get it, but when I actually tried it in one of my sessions, the difference was stark. Actually I still don't understand why this effect happens but it has blown my mind - or perhaps that's why! Many thanks!
Fantastic video! I actually used this very same technique to make a mono-to-stereo, fully monocompatible processor (i made a reaktor ensemble). It injects mono signal into the side and delays it by an amount chosen by the user (small amounts btw). I discovered that this works very well also on stereo tracks, not only mono ones! By watching this video i realized that dalaying the mid channel is SO COOL! Also the all-pass is a very great idea! I sould modify my ensemble to make it a powerhouse filled with Dan’s knowledge 😀
When I like something, I want to know how that thing/process works, and most of the times that leads me to the deepest roots of that thing (even to electrons or quantum theory). On that journey, besides finding new ways of editing the final product, I also find new ways of exploring the creativity/artistic process. It gives me new posibilities, new ideas ... it widens the horizons ... It is so beautiful to learn about how things work. We are so lucky to have Dan Worrall sharing his knowledge with us!
When I mix to stereo (I'm moving more toward surround) I feel that it's absolutely necessary to have motion. Some parts can just sit where I place them in the field but other parts really need to travel. I've noticed that some note sequences gain emphasis through motion, like as if our hunter instincts are trying to pin down their location. While other parts can be de-emphasized by scattering them. Some notes need to start in one place and reach completion in another. I frequently automate 'Wider' to send particular parts out or in. I seldom use algorithms, like LFO's, to produce motion but perhaps I'll begin more to follow your example at the end of this video to give static parts more of a living pulse. I certainly agree with your contention about headphones with the addendum that it's utterly impossible to keep our heads absolutely motionless unless one were to painfully clamp one's head into a fixed position. Perhaps this is another reason why headphones contribute so much to ear fatigue. I like to watch people's heads move as they listen to my work.
I use Melda's MUtility for this kind of stuff, because it's able to do a lot of things besides what's on this video, while still remaining free; do haas, do invert stereo, mid delays etc.
This was really an outstanding video! Thank you DW. Assuming high quality headphones compared to high quality stereo speakers with both powered by quality electronics with a high quality music source: IMHO I prefer speakers. I'm just a retired 70 year old music lover. My standard of reference was always LIVE music. Particularly un-amplified music. Nothing beats the full-body open-air experience. That being said, headphones may allow you to hear a particular detail that might get lost in a speaker setup. But, as DW points out, they are not stereo. And that's fine!
I've learned to watch all of Dan's videos regardless of if I think I care about the topic... (what do I care about LCR mixing?) -- I always end up learning a ton of fascinating general audio art and science. Thanks Dan.
@@TjMoon91 Dude I got a masters degree in Music Production at Berklee College and we didn't even scratch the surface of this kinda stuff Dan explains so eloquently. Could've saved me a boatload of cash!
Best UA-cam homepage recommendation I've gotten in months, how had I never come across this channel before?! Immediately subscribed, your content is just what I needed to nerd out on!
i've been doing all sorts of ITD and HRTF panning in my work for some time and i'm happy to see something along the same lines here - f the phase police indeed, 2-channel audio is a playground
Brilliant title, I instantly knew I'm gonna learn something new here. Well done. As a side note I do love hard panned instrumets on headphones, theres nothing unnatural there for me.
There are people who got this recommended, but if you know how to set up your stuff, this demo is kinda "anstrengend" to listen but also gold. True understanding begins with listening^^
Fantastically explained, i've noticed that on a lot of vintage songs the panning is hard left and right. More modern songs tend to use the special stereo headphone sound
I have absolutely no frickin idea what you're talking about most of the time. This is why producers with a creative bent need engineers with technical expertise. You could probably do my job and a tech's job no problem. Incredible.
Been watching different mixing, mastering and processing channels and none of them even mentioned that dilemma. Glad I found out about you via FabFilter and I admit I stick to your content, because it's easily explained, well represented and good to recreate. Thanks Dan, keep it up 😊
Sometimes listening to Dan is like listening to a warlock explain the dark arts. That delay effect is so drastic and sounds so good yet so easy to achieve, souls must have been bargained with to learn the trick! 🤯
Sooo. After watching the video 5 times probably with dif headphones (on the gaming ones evn tough its panned straight right i was able to hear in on the left so i had to figure that out) i think i finally completly understood the whole thing. Absolutely mindblowing! Especially the compatability that comes with it. Crazy
If I watch more than one of Dan’s tutorials in a row I need to have a nap for my brain to recover. Amazing content. I think I am done with all other tutorial channels.
This is a game changer! I've always hated hard panning my tracks because It felt unnatural, yet I always heard people talking about LCR mixing. This filled in that gap I had in my understanding and I'm excited to play with this new knowledge. Thanks a million!
since this video ive now incorporated Volcano3 into my mastering chain, and use it simply for allpass filters. 2 of them. rotating phase around selected frequency really is a game changer for mastering. ive gotten so many mixes that i feel are phasey in some way or another, and with this new found knowledge i can correct that fairly easily. so far ive not noticed any negative side effects from doing this, only very positive ones. such as increase defintion in the lows, and reduce phasiness in the stereo field. So THANK you dan, for this information.
The most natural trick I personally use is utilizing some Early Reflections. It somehow emulates the natural room and brings the hard panned elements into a real space. I also tend to give the other elements of the mix the same Early Reflections so that the whole mix sounds more cohesive. Amazing video BTW Dan!
I am the same, I like having a small space reverb with mainly ER (like a studio convolution or something) and feed everything to it to fill in the distance between instruments.
I personally love the un naturalness of hard panning headphones. I mean it’s easy to go over board but it’s also easy to make something sound really interesting.
Done as an effect, it’s fine. But the problem is most stereo mixing comes down to a choice of what equipment to cater to. When it’s done poorly, it’s really jarring to me. Loads of old stereo stuff clearly wasn’t made for headphones. I wish there was a quick correction for it at the music player level.
@@CosmicHippopotamus old stereo stuff from the 60s for example where the drums are hard left, bass hard right, only vocals centered or whatever. really jarring
Man, this is brilliant. I'm not going to pretend I understand the nitty gritty, or what should sound good or the technique. But I know for a fact that the basis here of phase effecting how you here panned stereo is easily applicable. I don't need to understand all of it right now to know just being aware of this will improve my stereo mixes. Tell you what, helps that you make these videos like you're a mystery youtuber talking about an unsolved murder. But instead it's an indepth explanation of audio engineering.
This LCR in headphones problem has always bothered me! Especially because I also often work with mono sound sources. Generally my go-to fix would be to use plugins like Ozone Imager or ThrillseekerXTC to stereoize the sound slightly. But I love what I'm hearing here and can't wait to try it out.
Presumably the reason delaying the centre is potentially more pleasing is because in nature the absolute phase of a signal originating on the centre line is more or less irrelevant (depending only on distance), while signals originating off centre always arrive at one ear before the other, so that the lateralised hemibinaural (is that a word?) percept is naturally available before the spatially integrated one. Indeed, that's how your reflexes know which way to spin your head when you're startled. I'm not arguing that delaying the centre corresponds directly to any real-world acoustic phenomenon, just that the lateral signal arriving first is more “normal”. You can probably get the opposite effect naturally, too, occasionally, but you might need to stick your head in a pipe or something. Many thanks, incidentally. I had got halfway there screwing around in Eurorack, but you gave me the impetus to think about it carefully.
Quick Tip: If you are trying to do the Voxengo Sound delay phase in Abelton Live, don't pan the whole track, put in a utility before the delay and pan that. The pan on the actual track will affect all of the audio in the effect chain, and it won't work.
Wow. One of the most educating and enjoyable videos I’ve watched in a while. The production quality is beyond this world!!!!! I’m not surprised that an audio engineer has that good of voiceover audio, I’m just flabbergasted by how much effort you put into your videos!
Hey Dan, did some research regarding headphones earlier this year - the answer I came to for classifying the how headphones and stereo speakers interact with our ears was the terms "Diotic" and "Dichotic". Dichotic meaning that both ears hear different stimuli, and diotic meaning that both ears hear the same stimuli. I posited that dichotic and diotic properties could exist on a spectrum (how different sound is at each ear), and also that it was a statement about the physical properties of listening via loudspeakers/headphones rather than production qualities. With stereo loudspeakers, hearing in a diotic manner is never possible, as naturally you note that the head (and pinnae) shape and change audio flowing into the contralateral ear from the left or right speaker respectively. Thus listening 'in the open' (loudspeakers) can never truly give both ears the same exact stimuli (which is part of our binaural localisation system for placing sounds in a 3D environment). But It also may come down to how accurately one wanted to measure the differences in both ears to measure (unsure if currently a word) 'dioticity' (how similar your ears hear any given sound?..). And whether the properties of dichotic and diotic should be restricted to properties of the physical realm (listening device), or if they should be considered holisically - including production considerations (i.e. if mono audio). Hope to have provided something to consider! Keep the videos coming 😁 EDIT: Oh and also yeah, I personally (for whatever that is worth) consider stereo NOT to relate to whether something is binaural, dichotic or diotic. To me, stereo is strictly about having two channels of audio. I do feel that we need a more expansive vocabulary for discussing the differences between headphones and loudspeakers, not suggesting that this is squarely your responsibility of course haha! Hopefully this may help spark thought in other's minds though 😊
Yes, when he says it's "pedantic" to call headphones "binaural" and speakers "stereo", I'm not convinced it's pedantry. I think it's actually just wrong. :-D "Stereo" is not just used to describe two-channel audio, it's more of a medical or scientific term to denote that something has correlated pairs of stimulus. Think of 3D glasses, for example. There's a vernacular implication that when you say stereo, you mean audio content that has a soundfield presented with two channels of audio. Contrast that to dual mono, which is just two channels that may or may not have anything at all to do with each other. Or binaural, which I've always taken to convey something a little harder to describe -- like a stereo presentation that is meant to convey 3D space the way your ears would perceive if they were _in_ that space. Suffice to say, I don't think stereo vs. binaural has anything at all to do with the mixing of audio in space on playback. If it were used in that context as a clarifying aid for the purposes of discussion, then fine. But I feel a blanket statement like "headphones are not stereo (they are binaural)" is probably verging on a fallacious statement for the purpose of being sensational.
Hi Dan! I keep coming back to your channel. This is supreme content. I am by no means as educated on the topic of sound as you are. But being able to learn why my mixing skills suck is kind of priceless. I have watched and listened to all of your videos on this channel (the fabfilter ones as well) And i got to say: Everytime i consume your content i get amazed. Spot on. To the point. And most importantly why! Thank you for the awesomeness!
What an ear opener! Great tutorial. Tinkering with the Voxengo plugin I'm finding you can get sounds to feel like their reaching for the backbone of the mono Bluetooth speaker. Even though the sense of width isn't there as with earbuds, it's fascinating how you can achieve depth even in standalone mono Bluetooth speaker and play with that. Interesting also was combo of Voxengo and a Limiter in depth perception, I can't explain it but yummy!
So now at last we get to know how you get the depth in the recordings you've used in your tutorials. As you've hinted at before - phase is your friend. And like many people who've commented here, I'm going to watch again straightaway.
OOH that last one looks extremely useful. I might never use it across the entire master, but using it on purely ambient pads and backgrounds could add a whole layer of depth.
I started to understand this more having recently used Boz Digital Labs Pan Knob with both the delay and X-over pan functions enabled. Brilliant plug-in. Had no idea why it was doing what it was doing and it kind of led me to this video. Thanks as always Dan. Such a good explanation
It'll take me a couple of watches to digest this but I already know it sounds great. Can you use this technique on channels that aren't hard panned but that are moved just left or right of center to bring them back into cohesion when listening on headphones? I don't usually pan very much at all because it grates on me even when listening with stereo speakers but this could help to form a better image perhaps?
Absolutely blew my mind. You are a fantastic presenter and you paced this incredibly well. As I was going to share this with someone, I was shocked to realize that I was nearly at the end of the video. The flow of the video is smooth af.
I love this stuff! Dan’s great, and if we all just appreciated having the input of an experienced sound engineer for free or very little money, there is no argument. And if Dan did not have a point of view of one flavour or another, it would make him rudderless just like it would make anyone rudderless. Once you have a POV you can begin to work from there, and even account for your own bias in having it. I have come to the conclusion about binaural, that it’s superior to stereo, and in fact, is what early on people may have expected or wanted, stereo to be? That’s just my POV. I was listening to the increasingly great-sounding (I’m older) Tangerine Dream, and some Edgar Froese, and the ability to completely send a source sound to only one ear, means the stereo effect can be more radical? So this TD track, has a shimmering guitar chord that seems to be roaming R to L so much, it’s effect is it is rotating around your head, I realise. This is pretty good for the 1970’s or 1980’s. It doesn’t quite sound like it’s at the back of your head, it’s more like it goes around behind a barrier that’s behind you, but it’s awfully good. I also used earbuds to listen to a Robin Trower track (‘What lies beneath’ I think), and in this binaural mode of listening, I can picture with a lot of precision, the exact location in 3D space where the different instruments are appearing to emanate from. This really doesn’t happen with a pair of stereo speakers - you’d need a surround-sound system with a curved array rather like pixels of a modern computer screen/TV. Earbuds are a lot cheaper?! Anyway, these are my modest thoughts on the matter. I have one poser for you Dan - it MUST in theory be possible to make that TD track guitar that seems to rotate around me in binaural listening, actually sound like it’s going behind me? It must be possible to fake it, with only two devices attached to me, because I only have two ears, yet, I can tell in reality, when something is behind me. Is it too subtle an effect for us to capture without some AI system processing the sound, and simulating the effect of our ACTUAL ears, being hit from behind the ear instead of in front? Here’s my wacky theory - that it has not been done before now suggests to me it involves an effect that simple headphones or stereo earbuds cannot simulate, and that could possibly be moving air, hitting the minute hairs on the ear surface, and your brain by this method, knows the sound is from behind. Over a period of years, your brain will learn, what certain common sounds, sound like, from every possible angle. This database you possess, is much more complete however, for everyday sounds eg a motor vehicle, than say a guitar chord. So, making a motor vehicle sound, and getting it to sound as if it’s coming from behind the listener, must be easier, than trying to get a guitar chord to sound as if it’s doing exactly the same motion relative to our heads/ears? I maintain, that if some algorithm could be built (or generated, hence I said AI), to process sounds to make them appear to come from behind us, then it could presumably be tweaked to accommodate for our individual head dimensions and hair even, to really fool us. But, if the surface of the ear is involved, and the hairs, the brain won’t be fooled by earbuds alone - it will need a vacuous pair of headphones if not a ‘head-dome’ so that these sounds can in effecto of course, REALLY be made to come from behind us. As I say it would best be effected by a few dozen very small speakers spread across like pixels are, entirely ‘coating’ the inside of the ‘head-dome’, but this would be sort of cheating…because you’ve REALLY made the sound emanate from behind the ear! Surround sound also cheats, because again, it simply makes the sound come from behind you and therefore, is the same phenomenon in reality as a ‘real’ sound in the real world. I just wonder if it’s not possible to fake a fully rotating sound around your head, for this reason? In other words, why does my brain simply and clearly assume and arrange, stereo sounds, to be at a 180 degree arc in FRONT of me, as they seem to be, when using either binaural or stereo sources to listen to? There is some phenomenon at work, that means my brain doesn’t accidentally suddenly think the sound at say 1 o’clock to the right in front of me, isn’t at 5 o’clock behind me instead? I suppose you could do a test, by blowing air gently and silently at your ears, while listening to a track or sound, and see if your brain suddenly decided to treat the source as behind you not in front? Anyway, I’m waxing on. Thanks Dan for another enlightening video and tutorial on audio science. I’m off to get our cooling fan, and put my earbuds in, and see if Robin or the TD track, can be made to seem to be behind me, if my ears sense moving air from that direction!
I don't know about creating a 360 degree environment in processing, interesting idea. But binaural recording achieves what you describe. You might already be familiar, but it uses microphones positioned inside the ears of a fake human head. Check out virtual barbershop for an example.
Such a coincidence, that last night I watched his first video about LCR mixing and now I see this in my recommendations, noticing it's from 4 hours ago. I guess it's a sign for me to subscribe.
So I guess this is what the Infected Mushroom Wider VST does to the signal (as well as other stereo-widening effects) Thank you for the mid/side eq trick at the end, this is amazing
Huh, unexpected but great info. The trick in SlickEQ reduced weirdness and massively improved the sound picture to my ears, and I need to implement it. I had no idea I needed this.
which instruments do you use it on? I tried it on hard panned guitars, and this effect adds an unpleasant random panning effect plus to the depth. I mean real guitars, not that cloudy obscured background guitars Dan had in this mix :)
So, UA-cam decided I should be interested in this. While I'm not much into audio production, and I'm not an audiophile, this did remind me of something I did. A number of years ago, back when Winamp was *the* player to use, I sometimes encountered songs that were not great with headphones, usually because the mixing was just "this instrument goes in the left ear, this other one in the right ear". So what I did was grab the audio processor plugin, and I wrote myself a little audio processing program, that would take the input, apply a small delay, and add the result to the opposite channel with lower volume. The difference in the experience was amazing. I don't listen to music as much as I used to, these days. Not because I don't enjoy music, but rather because I have come to enjoy the silence.
@@makrowelle Just kidding. His tutorials really give you superpowers. But as soon as everyone knows about them superpowers will become standard powers 😁
I use Goodhertz Midside for this type of stuff, especially monoing the bass. I've never realised I could be introducing any phase issues before and I presume the Goodhertz people are smart enough to have taken it into consideration.
@@nexusobserve So, maybe all of us have been! Isn't a point of using Mid/Side that you CAN HPF your sides to clean up the bass. So... rather than casually high passing our sides, what's the new intentional option?
Brilliant as always, most of this even comes through on stereo Ipad speakers and I said wow out loud just before you did. Thank you Dan, can’t wait to try this!
Many EQs. Just look at a picture of the plugin. Sometimes it's referred to as Lateral/Vertical, Mono/Stereo, Mid/Side, Sum/Diff or simply M/S. If you use a highpass filter and set it as low as possible this will have the same effect.
@@willfrank961 I have pro q and a few other m/s capable plugins, but none of them have an allpass option. A highpass filter set as low as possible is NOT the same thing as an allpass, even on sounds that do not contain low frequency content.
@@elirosenkim3659 I didn't say it was the same. I said it has the same effect. As in is causes a phase shift. Is that incorrect? Happy to learn something new.
@@willfrank961 they both cause a phase shift, but the m/s eq would cause a shift in frequency response as well, and the allpass would be more flexible in tweaking the phase response.
Stuff like this is why I invested in Waves' virtual mix room plugin. I don't have the space to set up a proper monitoring setup and using a virtual mix room plugin at least gives me something close to sitting in a room listening on speakers rather than discreet headphone panning. It's made a huge difference to the overall quality of my mixes, if anything I've ended up with mixes that sound great out of speakers, and then sound absolutely massive in headphones. Definitely going to try a few of these tricks out too to get even more depth.
Dont tell this to any "audiophile", since they are gonna be livid. They will probably start telling you how headphone x presents music as though one was sitting in a concert hall. Well, provided you use some bogus 2000 buck cable and wait for the stars to align in a certain way.
No idea why the algorithm put this in front of me, but I'm not objecting too much. Most of that went over my head but I did get a small insight into what as a layman will just call "sound". I can see an almost infinitely deep rabbit hole here that I could lose myself in. I think for my sanity I shall carefully go around it. That said I will carry with me a greater respect for those people in the field.
As a young man I immersed myself in music, mixing and production. I'm now middle-aged and wish I'd undertaken something less complex like rocket science.
lol
I went from molecular pharmacology and medicinal chemistry to audio engineering and it's far more convoluted and mysterious.
In rocket science, fuel oxidizer, and a spark will always go "boom"
No joke, thousands of hours in audio production/recording/editing, hundreds in Kerbal Space Program, and the audio stuff is harder by far.
The reason why audio is harder is that we're dealing with something that is both tangible and intangible at the same time.
Rocket science, you can rely on hard numbers to produce a predictable result.
Audio, move the speaker slightly, and it messes up everything and you have to do everything again from the beginning.
Just when you think you understand something, Dan absolutely blows your mind and makes you realize how much is left to learn and discover in audio
And how much of it is right, if you add just a little phase shift. ;)
I’m always in awe of Dan’s ability to meld his technical and artistic talents into these intimidating displays of audio engineering skill. I’m an engineer and a musician, and to me it feels like he has the analytical capabilities of the most rigorous scientists combined with the empathy of the deepest creative. And then he’s generous enough to share lessons like this with us. I share this channel with anyone who will listen. Thanks Dan.
Agreed
Artistic talents?
talented artist
I so second that emotion!
And then to have a speaking voice like butter on top of all that!
This is absolutely wild and you're a great explainer.
Nice to seee you here Jeremy
Dan Worrall is tutorial perfection
Dude, he's amazing.
It’s not that ‘wild’ … practical if anything
He’s absolutely right, but he’s only proving a point, whilst missing the bigger picture:
The majority of people today listen to music with headphones. Only a tiny percentage of that majority listen on headphones that are worth a damn. I’ve been in the game for 25 years and mixed for several A-list artists, and I understand completely…
but if I were smart, I’d probably just save my money and time, and mix on Apple wired earbuds, because at the end of the day, that’s still what the majority of people use, and this fact and trend isn’t going anywhere.
The days of audiophiles with large, expensive gear, with large expensive speakers sitting on pyramid points with little slabs of slate under each point, are over. In 10 to 20 years, that generation will be dead, and so will the days of the home living room audiophile listener.
Don’t even get me started on car stereo systems 🙄 There isn’t a seat in the car that comes close to providing an accurate stereo image.
So if we were all smart, instead of being purists, we would mix on those Apple earbuds, and check our mix in the latest, highest selling, shitty economy car, and let the past go.
It’s the hard truth, but it’s the truth ✌🏻
There’s a part of me that realizes that this is incredibly insightful and important, and then there’s the part of me that doesn’t quite understand anything Dan talking about. I’ll have to watch again tomorrow.
Watch Dan's video about mid-side processing first and that should make it easier to wrap your head around.
You're not the only one
There's also me who doesn't know how most of this is done in Reaper
@@marcchurch1245 do you know the name of the video sir ?
@@gooda864 it's called Mid-Side Demystified. It is a Dan Worall video but it's not on his personal channel, it's on Fab Filter's channel.
I have been perplexed by this exact phenomenon for literally years with zero understanding of what it even is or how to properly describe it. That sense of "the sound is tiny and inside my ear", or just otherwise poorly placed, but with seemingly no way to put it where I want it. Randomly stumbling upon this video and hearing the moment you enable the phase shift was like having the universe's secrets laid out before me. Absolutely stunning.
The most simple but dramatically effective thing I've ever seen in music production so far is literally 1ms of delay. I'm floored.
What’s it doing to the mix exactly?
@@leaveitorsinkit242 I'll take a stab at it. The speed of sound is roughly 1ms per foot, so if you delay the sides by 1ms (millisecond), it will sound like it was recorded from 1 foot back from the mids. It helps give it some separation, specifically depth. It is essentially shifting the phase of the sides in comparison to the mids. Anytime you introduce a tiny delay, it shifts the phase slightly. You can drive yourself crazy messing with the phase correlation and making sure the phase is "perfect", but in reality you don't really want it to be "perfect", you just want it to sound good and make sure that it's not canceling the frequencies that you want of the instrument.(fully understanding phase cancelation is critical, and what I consider the quintessential intermediate mixing knowledge) If you mic up a drum kit with close mics, overheads and room mics. All of these mics will be getting the signal at slightly different times, and that is what we want, otherwise it would all sound like it was a bunch of mics the same distance away and wouldn't sound like a 3d space. It's the subtle phase differences that allow our ears to perceive the distance involved and therefore depth. Of course we take great care at the time of tracking to make sure the phase isnt causing cancelation, by slightly moving the mics closer or further away (inches of difference) until it sounds full and rich when listening to all the mics together. If you went through and manually lined up all the tracks where all the peaks and valleys were perfect, than the phase would be "perfect " but it wouldn't sound like it had the space that you were going for when you put up all those mics. It would actually ruin the effect. So knowing that, you can artificially recreate this effect by using short delays, like 30ms and below, remembering the 1ms=1ft rule. Those types of delays can help you shape the soundscape from front to back, along with many other things. But keep in mind that you will only hear the effect of the delay spatially if you are listening to the delayed track in reference to a same or similar un-delayed track (just like the sides are percieved against the mids in this track because a lot of the same information is in both the mids and sides). A delayed track on its own will only sound like it was played a little later, but heard with the same signal un-delayed, it starts to change the spatial relationship. There are also creative effects with short delays. Look up a video on the Hass effect, where you can actually make something sound like it's even further left or right than hard pan LR. That trick has been used in many famous productions and I feel like it will help you understand what this trick in the video is doing, just slightly more exaggerated with the Hass effect. Phsycoacoustics. Yay! Hope this helps, 🙏 👌 have a great time on your mixing journey.
@@aholder4471 just to confirm, if I went into audacity, and delayed the right side (assuming I planned both channels left and right) by 1ms, it would provide the Illusion that the soundstage is wider?
@@TimberWulfIsHere Yes, but he is using a middle/side feature of a plug in and delaying the sides by a ms in this video, not left and right. If you went in and delayed the right channel of the entire mix, it would honestly just mess your Soundstage up and probably not be what you're looking for. I would leave the delay of left vs right to individual instrument tracks, not the whole mix but hey, it can't hurt nothing to try it. That's the best and worst part of digital audio, you can try a bunch of stuff and not have to commit to anything.
@@aholder4471 I just played around with it, it actually does work, but i can also see what you mean; it does sound ever so slightly out of sync for some sounds at least.
I'm a developer for FL studio mobile and this video inspired me to add m/s delays to it in my first week of working there - just thanks to this video.
Thanks Dan 🔻
Exit: talking about the spacer module in flm
👋
hey I started with mobile man that´s so cool
@@tysk_dt eyyyo xD
Wow, I have to thank you man. Days that I don't have my pc, I was using fl st mobile. It was carrying me.
Personally ive never understood this "no one listens in mono" argument. Even if your mix is never played back in mono, using M/S monitoring to shift density and dynamics makes a worlds difference in getting a mix that is perceivably "wide" but maintains depth and focus.
Michael Paul Stavrou agrees with you, and I agree with him.
earth to star trek bridge control: nobody cares, go home
People with functioning ears care.
exactly, but i do understand those who don't get mono mixing. Because the newer generation that do their work from home, only use stereo mix. and then its up to the one who makes the master of the song, to do the mono lvl'ing (as i would call it, when its only mixed in stereo speakers :) )
but then again many newer headphones that mainstream users buy are fake stereo where there is a driver that spread the sound between the 2 mono left right speaker so it sounds "true", and by true i mean massively coloured and therefore not the true mix you are hearing :)
There's another thing that's AMAZING about this technique that might not be so obvious at first: Mono-compatibility. Utilizing a mid/side channel delay will NOT affect the timbre OR level in mono playback at all (give or take a fraction of a ms of delay on whatever you apply the technique to). This is because when converting to mono, hard-panned material will not change in level, and the side channel will disappear completely. This is unlike an LR imaging technique like Hass delay or LR filtering, which will introduce a sort of flange/comb effect when you convert to mono. Genius!
Yes indeed! I was initially flabbergasted about how this doesn't screw with mono-compatibility at all.
I'm still trying to wrap my head around what I'm hearing exactly, and why. I mean I know what M/S processing is and how the encoder/decoder works (which is simple enough in itself), but I still find it generally quite counter intuitive how things behave sonically when processing them separately.
I suppose the M/S terminology is already confusing. Sum/Difference captures the meaning of the process more accurately imho.
But anyway, after playing around a bit with delaying the mid-channel, I wouldn't slap a mid-channel delay on the mix-bus at the end of a mix session as it screws around too drastically with some very specific mixing choices I made. But this technique certainly opens up some very intriguing possibilities when applied with focus and creativity! (as Dan demonstrated extremely well)
Now we'll see 30 "next level" tutorials within the same month with this secet trick of the pros mid side delay trick.
Invade the channels and bring em here
Lmaoooo I literally had the same thought hahahahaha
oh god
It is absolutely incredible the fact that you are handing out knowledge of this depth for free. This is specially great for vocals or any kind of non-digital instrument that one may record on studio. Amazing stuff man, keep it up.
I can’t believe I was doing this on vocals already. I guess you really should trust your ears
Damn this is incredible. Been chasing "depth" for the longest time, and I'm just blown away right now.
I just had that revelation, too. It's been depth I've been seeking. Mind blown.
@@FranklynMonk Yes. It's literally like Width got shifted 90 degrees.
I'm deaf in one ear. Downmixing to mono is a matter of accessibility.
Dan is in a league of his own when it comes to getting technical while still making it engaging and easy to understand.
I have been making music in DAWs for 20 years, since I was 12.
This video finally explained to me and made me understand, what the hell a sound engineer's job is, and why it's different and separate from what I've been doing.
Wow, mind blown 🤯! If Dan did a mix of a song from scratch, showing his workflow, plugins and what he listens for, I would pay good money for that.
why there are millions of videos on youtube like that already
@@codesymphony Quality vs quantity my friend. Thousands of videos with half baked or outright incorrect information is no good.
@@bluestringmusic Correct. I've seen hundreds of "tutorials" on music production to this day. Most of them hardly taught me anything. With this video however, I feel like I just went on an experimental trip. Tons of great information here, something many do not include into their videos. This is QUALITY demonstration, not some over-the-top animations, 10 second cuts of the work process or time-lapses. Some people also like to just slap bunch of filters on their mix, while they're convinced it's always the same process to mixing and then they call it a day. I wouldn't know delay made such a profound difference, or how the channels are encoded after panning hard left, right, mid. It feels good to hear a difference, but also UNDERSTAND why there is one.
Every sound I hear is stereo because I was born with a magical gift:
Ears on both sides of my head, one aimed left & one aimed right.
I have over 10k in home theatre, studio, pro audio, and simple Bluetooth speakers everywhere in my house. My wife still listens to all her music on the phone.
Yup, this is relatable.
Convenience is usually what people will go for, I admit I'm guilty of doing that myself when I can't be bothered to plug it in to something :D
I hate when she plays music on her phone... All I hear is “chk, chk, chk...”
Each eardrum is a world...
@@whereismyleafage1887 not even mono.
I hope they never heard Led zeppelin on it :)
5:30 was still a tad difficult to pick up on, until you solo'd an element out, thanks for teaching me something I always felt but didn't know how to acknowledge, or label
That background track was really relaxing. I was almost meditating over the issue of video. good job 👌🧘♂️
Pink Floyd-ish ;)
it's tecnicly wrong to call it background track in this case lol
In The Air Tonight drum vibes.
Agreed
Well this was really interesting and mind-expanding. The clickbait title was answered within 90 seconds, but as I heard him expanding on what you can do with it once you know that, I couldn't look away. This seems like the result of decades of engineering experience, the sort of stuff I'm not even sure Alan Parsons was fully aware of and utilising.
Add to that the fact that all this improv source material is by the same guy. Absolutely brilliant.
I will have to watch this a couple more times before I fully understand it, I think.
Please, do a video on that ambient guitar effect! Sounds awesome
I love airwindows
I zoomed in and it said RealVerb
@@nexusobserve well i just made a wild guess and i'm pretty sure it would yield in a similiar sound
No Airwindows. No phaser. Guess again ;)
@@DanWorrall my first guess would've beena spring reverb
I was completely blown away by this, now my mixes sound rich and deep, thanks a lot man
Fuck yeah Dan. I can't tell you how much I appreciate your sharing these pearls gleaned from years of deep diving. 🙏
Wow! that was a real metaphorical ear opener! I'd listened to the video a couple of times and didn't get it, but when I actually tried it in one of my sessions, the difference was stark. Actually I still don't understand why this effect happens but it has blown my mind - or perhaps that's why! Many thanks!
Fantastic video! I actually used this very same technique to make a mono-to-stereo, fully monocompatible processor (i made a reaktor ensemble). It injects mono signal into the side and delays it by an amount chosen by the user (small amounts btw).
I discovered that this works very well also on stereo tracks, not only mono ones!
By watching this video i realized that dalaying the mid channel is SO COOL! Also the all-pass is a very great idea!
I sould modify my ensemble to make it a powerhouse filled with Dan’s knowledge 😀
When I like something, I want to know how that thing/process works, and most of the times that leads me to the deepest roots of that thing (even to electrons or quantum theory). On that journey, besides finding new ways of editing the final product, I also find new ways of exploring the creativity/artistic process. It gives me new posibilities, new ideas ... it widens the horizons ... It is so beautiful to learn about how things work. We are so lucky to have Dan Worrall sharing his knowledge with us!
When I mix to stereo (I'm moving more toward surround) I feel that it's absolutely necessary to have motion. Some parts can just sit where I place them in the field but other parts really need to travel. I've noticed that some note sequences gain emphasis through motion, like as if our hunter instincts are trying to pin down their location. While other parts can be de-emphasized by scattering them. Some notes need to start in one place and reach completion in another. I frequently automate 'Wider' to send particular parts out or in. I seldom use algorithms, like LFO's, to produce motion but perhaps I'll begin more to follow your example at the end of this video to give static parts more of a living pulse.
I certainly agree with your contention about headphones with the addendum that it's utterly impossible to keep our heads absolutely motionless unless one were to painfully clamp one's head into a fixed position. Perhaps this is another reason why headphones contribute so much to ear fatigue.
I like to watch people's heads move as they listen to my work.
Your voice, the pacing of your speech and the music are ASMR. Listening on a phone with Bluetooth speaker.
I use Melda's MUtility for this kind of stuff, because it's able to do a lot of things besides what's on this video, while still remaining free; do haas, do invert stereo, mid delays etc.
If you don't mind me asking, how do you delay only the mid channel with MUtility? Can't figure it out :/
Thanks for the tip! I realize I already have Mutility, but have never even taken a look at it.
This was really an outstanding video! Thank you DW.
Assuming high quality headphones compared to high quality stereo speakers with both powered by quality electronics with a high quality music source: IMHO I prefer speakers. I'm just a retired 70 year old music lover. My standard of reference was always LIVE music. Particularly un-amplified music. Nothing beats the full-body open-air experience. That being said, headphones may allow you to hear a particular detail that might get lost in a speaker setup. But, as DW points out, they are not stereo. And that's fine!
It's been a while since I've actually learnt something new from a tutorial, but man this was informative.
right?
i know zero about what you're talking about but your voice and the the music made me listen to the whole thing
I've learned to watch all of Dan's videos regardless of if I think I care about the topic... (what do I care about LCR mixing?) -- I always end up learning a ton of fascinating general audio art and science. Thanks Dan.
Seriously! I learn more in a video where Dan looks at a specific plug-in that I’m never going to use than I do from any other UA-cam tutorials.
@@TjMoon91 Dude I got a masters degree in Music Production at Berklee College and we didn't even scratch the surface of this kinda stuff Dan explains so eloquently. Could've saved me a boatload of cash!
Best UA-cam homepage recommendation I've gotten in months, how had I never come across this channel before?! Immediately subscribed, your content is just what I needed to nerd out on!
i've been doing all sorts of ITD and HRTF panning in my work for some time and i'm happy to see something along the same lines here - f the phase police indeed, 2-channel audio is a playground
Brilliant title, I instantly knew I'm gonna learn something new here. Well done. As a side note I do love hard panned instrumets on headphones, theres nothing unnatural there for me.
As an aspie ADHD engineer I guessed the entire point of this video from the opening negative pressure on my left ear. Fantastic video mate!
There are people who got this recommended, but if you know how to set up your stuff, this demo is kinda "anstrengend" to listen but also gold.
True understanding begins with listening^^
Fantastically explained, i've noticed that on a lot of vintage songs the panning is hard left and right. More modern songs tend to use the special stereo headphone sound
It's called anti-phase
I have absolutely no frickin idea what you're talking about most of the time. This is why producers with a creative bent need engineers with technical expertise. You could probably do my job and a tech's job no problem. Incredible.
Been watching different mixing, mastering and processing channels and none of them even mentioned that dilemma.
Glad I found out about you via FabFilter and I admit I stick to your content, because it's easily explained, well represented and good to recreate.
Thanks Dan, keep it up 😊
That 1 millisecond delay blew me away! Incredible lessons here!
Sometimes listening to Dan is like listening to a warlock explain the dark arts. That delay effect is so drastic and sounds so good yet so easy to achieve, souls must have been bargained with to learn the trick! 🤯
I don’t even hear what’s going on sonically. Like… how’re all his hard panned elements “chasing” each other in the 3-D spectrum?
@@leaveitorsinkit242 watch it on a good pair of headphones
Sooo. After watching the video 5 times probably with dif headphones (on the gaming ones evn tough its panned straight right i was able to hear in on the left so i had to figure that out) i think i finally completly understood the whole thing.
Absolutely mindblowing!
Especially the compatability that comes with it. Crazy
If I watch more than one of Dan’s tutorials in a row I need to have a nap for my brain to recover. Amazing content. I think I am done with all other tutorial channels.
i'll need to watch this over and over to really understand what i need to do. this system is beautiful.
This is fascinating! As someone interested in composition and mixing, knowing these tricks (and pitfalls) seems very worthwhile.
Would say underrated video, but you've already got hundreds of comments, so... KILLER video.
This is a game changer! I've always hated hard panning my tracks because It felt unnatural, yet I always heard people talking about LCR mixing. This filled in that gap I had in my understanding and I'm excited to play with this new knowledge. Thanks a million!
since this video ive now incorporated Volcano3 into my mastering chain, and use it simply for allpass filters. 2 of them. rotating phase around selected frequency really is a game changer for mastering. ive gotten so many mixes that i feel are phasey in some way or another, and with this new found knowledge i can correct that fairly easily. so far ive not noticed any negative side effects from doing this, only very positive ones. such as increase defintion in the lows, and reduce phasiness in the stereo field. So THANK you dan, for this information.
The most natural trick I personally use is utilizing some Early Reflections. It somehow emulates the natural room and brings the hard panned elements into a real space.
I also tend to give the other elements of the mix the same Early Reflections so that the whole mix sounds more cohesive. Amazing video BTW Dan!
I am the same, I like having a small space reverb with mainly ER (like a studio convolution or something) and feed everything to it to fill in the distance between instruments.
The most beautiful imaging tutorial I've ever seen. Thanks Mr. Worrall.
I personally love the un naturalness of hard panning headphones. I mean it’s easy to go over board but it’s also easy to make something sound really interesting.
Done as an effect, it’s fine. But the problem is most stereo mixing comes down to a choice of what equipment to cater to. When it’s done poorly, it’s really jarring to me. Loads of old stereo stuff clearly wasn’t made for headphones. I wish there was a quick correction for it at the music player level.
@@CosmicHippopotamus old stereo stuff from the 60s for example where the drums are hard left, bass hard right, only vocals centered or whatever. really jarring
Teardrop by massive attack does panning very well hehehe
Man, this is brilliant. I'm not going to pretend I understand the nitty gritty, or what should sound good or the technique. But I know for a fact that the basis here of phase effecting how you here panned stereo is easily applicable. I don't need to understand all of it right now to know just being aware of this will improve my stereo mixes.
Tell you what, helps that you make these videos like you're a mystery youtuber talking about an unsolved murder. But instead it's an indepth explanation of audio engineering.
This LCR in headphones problem has always bothered me! Especially because I also often work with mono sound sources. Generally my go-to fix would be to use plugins like Ozone Imager or ThrillseekerXTC to stereoize the sound slightly. But I love what I'm hearing here and can't wait to try it out.
Presumably the reason delaying the centre is potentially more pleasing is because in nature the absolute phase of a signal originating on the centre line is more or less irrelevant (depending only on distance), while signals originating off centre always arrive at one ear before the other, so that the lateralised hemibinaural (is that a word?) percept is naturally available before the spatially integrated one. Indeed, that's how your reflexes know which way to spin your head when you're startled.
I'm not arguing that delaying the centre corresponds directly to any real-world acoustic phenomenon, just that the lateral signal arriving first is more “normal”.
You can probably get the opposite effect naturally, too, occasionally, but you might need to stick your head in a pipe or something.
Many thanks, incidentally. I had got halfway there screwing around in Eurorack, but you gave me the impetus to think about it carefully.
Quick Tip: If you are trying to do the Voxengo Sound delay phase in Abelton Live, don't pan the whole track, put in a utility before the delay and pan that. The pan on the actual track will affect all of the audio in the effect chain, and it won't work.
Wow. One of the most educating and enjoyable videos I’ve watched in a while. The production quality is beyond this world!!!!! I’m not surprised that an audio engineer has that good of voiceover audio, I’m just flabbergasted by how much effort you put into your videos!
Hey Dan, did some research regarding headphones earlier this year - the answer I came to for classifying the how headphones and stereo speakers interact with our ears was the terms "Diotic" and "Dichotic".
Dichotic meaning that both ears hear different stimuli, and diotic meaning that both ears hear the same stimuli.
I posited that dichotic and diotic properties could exist on a spectrum (how different sound is at each ear), and also that it was a statement about the physical properties of listening via loudspeakers/headphones rather than production qualities.
With stereo loudspeakers, hearing in a diotic manner is never possible, as naturally you note that the head (and pinnae) shape and change audio flowing into the contralateral ear from the left or right speaker respectively.
Thus listening 'in the open' (loudspeakers) can never truly give both ears the same exact stimuli (which is part of our binaural localisation system for placing sounds in a 3D environment).
But It also may come down to how accurately one wanted to measure the differences in both ears to measure (unsure if currently a word) 'dioticity' (how similar your ears hear any given sound?..). And whether the properties of dichotic and diotic should be restricted to properties of the physical realm (listening device), or if they should be considered holisically - including production considerations (i.e. if mono audio).
Hope to have provided something to consider!
Keep the videos coming 😁
EDIT: Oh and also yeah, I personally (for whatever that is worth) consider stereo NOT to relate to whether something is binaural, dichotic or diotic. To me, stereo is strictly about having two channels of audio. I do feel that we need a more expansive vocabulary for discussing the differences between headphones and loudspeakers, not suggesting that this is squarely your responsibility of course haha! Hopefully this may help spark thought in other's minds though 😊
Yes... You have intiated some response flurries in my cerebral cortex.
Yes, when he says it's "pedantic" to call headphones "binaural" and speakers "stereo", I'm not convinced it's pedantry. I think it's actually just wrong. :-D "Stereo" is not just used to describe two-channel audio, it's more of a medical or scientific term to denote that something has correlated pairs of stimulus. Think of 3D glasses, for example.
There's a vernacular implication that when you say stereo, you mean audio content that has a soundfield presented with two channels of audio. Contrast that to dual mono, which is just two channels that may or may not have anything at all to do with each other. Or binaural, which I've always taken to convey something a little harder to describe -- like a stereo presentation that is meant to convey 3D space the way your ears would perceive if they were _in_ that space.
Suffice to say, I don't think stereo vs. binaural has anything at all to do with the mixing of audio in space on playback. If it were used in that context as a clarifying aid for the purposes of discussion, then fine. But I feel a blanket statement like "headphones are not stereo (they are binaural)" is probably verging on a fallacious statement for the purpose of being sensational.
Hi Dan! I keep coming back to your channel. This is supreme content. I am by no means as educated on the topic of sound as you are. But being able to learn why my mixing skills suck is kind of priceless. I have watched and listened to all of your videos on this channel (the fabfilter ones as well) And i got to say:
Everytime i consume your content i get amazed. Spot on. To the point. And most importantly why! Thank you for the awesomeness!
dear lord ...
the depth of this man's knowledge in all aspects of sonic production is apparently unfathomable ...
I am officially hooked ...
What an ear opener! Great tutorial. Tinkering with the Voxengo plugin I'm finding you can get sounds to feel like their reaching for the backbone of the mono Bluetooth speaker. Even though the sense of width isn't there as with earbuds, it's fascinating how you can achieve depth even in standalone mono Bluetooth speaker and play with that. Interesting also was combo of Voxengo and a Limiter in depth perception, I can't explain it but yummy!
You've just explained something I do without knowing why I was doing it. 😂
Thank you
When using only 1 hard-panned sound, it's like you going deaf on 1 ear without filter. Obvious and amazing!
Subscribing.
This was exactly the knowledge-gap I needed plugged right now to take my mixes higher! Thank you Dan!
So now at last we get to know how you get the depth in the recordings you've used in your tutorials. As you've hinted at before - phase is your friend. And like many people who've commented here, I'm going to watch again straightaway.
Dan, your videos are so informative. Here's a like. Cheers.
OOH that last one looks extremely useful. I might never use it across the entire master, but using it on purely ambient pads and backgrounds could add a whole layer of depth.
This is an absolutely brilliant video. Thank you for sharing your wisdom with us, Dan!
I started to understand this more having recently used Boz Digital Labs Pan Knob with both the delay and X-over pan functions enabled. Brilliant plug-in. Had no idea why it was doing what it was doing and it kind of led me to this video. Thanks as always Dan. Such a good explanation
It'll take me a couple of watches to digest this but I already know it sounds great. Can you use this technique on channels that aren't hard panned but that are moved just left or right of center to bring them back into cohesion when listening on headphones? I don't usually pan very much at all because it grates on me even when listening with stereo speakers but this could help to form a better image perhaps?
.
Search for a video called Smart Panning on SamiRabia’s UA-cam channel
Absolutely blew my mind. You are a fantastic presenter and you paced this incredibly well. As I was going to share this with someone, I was shocked to realize that I was nearly at the end of the video. The flow of the video is smooth af.
I love this stuff! Dan’s great, and if we all just appreciated having the input of an experienced sound engineer for free or very little money, there is no argument. And if Dan did not have a point of view of one flavour or another, it would make him rudderless just like it would make anyone rudderless. Once you have a POV you can begin to work from there, and even account for your own bias in having it. I have come to the conclusion about binaural, that it’s superior to stereo, and in fact, is what early on people may have expected or wanted, stereo to be? That’s just my POV.
I was listening to the increasingly great-sounding (I’m older) Tangerine Dream, and some Edgar Froese, and the ability to completely send a source sound to only one ear, means the stereo effect can be more radical? So this TD track, has a shimmering guitar chord that seems to be roaming R to L so much, it’s effect is it is rotating around your head, I realise. This is pretty good for the 1970’s or 1980’s. It doesn’t quite sound like it’s at the back of your head, it’s more like it goes around behind a barrier that’s behind you, but it’s awfully good.
I also used earbuds to listen to a Robin Trower track (‘What lies beneath’ I think), and in this binaural mode of listening, I can picture with a lot of precision, the exact location in 3D space where the different instruments are appearing to emanate from. This really doesn’t happen with a pair of stereo speakers - you’d need a surround-sound system with a curved array rather like pixels of a modern computer screen/TV. Earbuds are a lot cheaper?! Anyway, these are my modest thoughts on the matter.
I have one poser for you Dan - it MUST in theory be possible to make that TD track guitar that seems to rotate around me in binaural listening, actually sound like it’s going behind me? It must be possible to fake it, with only two devices attached to me, because I only have two ears, yet, I can tell in reality, when something is behind me. Is it too subtle an effect for us to capture without some AI system processing the sound, and simulating the effect of our ACTUAL ears, being hit from behind the ear instead of in front? Here’s my wacky theory - that it has not been done before now suggests to me it involves an effect that simple headphones or stereo earbuds cannot simulate, and that could possibly be moving air, hitting the minute hairs on the ear surface, and your brain by this method, knows the sound is from behind. Over a period of years, your brain will learn, what certain common sounds, sound like, from every possible angle.
This database you possess, is much more complete however, for everyday sounds eg a motor vehicle, than say a guitar chord. So, making a motor vehicle sound, and getting it to sound as if it’s coming from behind the listener, must be easier, than trying to get a guitar chord to sound as if it’s doing exactly the same motion relative to our heads/ears? I maintain, that if some algorithm could be built (or generated, hence I said AI), to process sounds to make them appear to come from behind us, then it could presumably be tweaked to accommodate for our individual head dimensions and hair even, to really fool us.
But, if the surface of the ear is involved, and the hairs, the brain won’t be fooled by earbuds alone - it will need a vacuous pair of headphones if not a ‘head-dome’ so that these sounds can in effecto of course, REALLY be made to come from behind us. As I say it would best be effected by a few dozen very small speakers spread across like pixels are, entirely ‘coating’ the inside of the ‘head-dome’, but this would be sort of cheating…because you’ve REALLY made the sound emanate from behind the ear! Surround sound also cheats, because again, it simply makes the sound come from behind you and therefore, is the same phenomenon in reality as a ‘real’ sound in the real world.
I just wonder if it’s not possible to fake a fully rotating sound around your head, for this reason? In other words, why does my brain simply and clearly assume and arrange, stereo sounds, to be at a 180 degree arc in FRONT of me, as they seem to be, when using either binaural or stereo sources to listen to? There is some phenomenon at work, that means my brain doesn’t accidentally suddenly think the sound at say 1 o’clock to the right in front of me, isn’t at 5 o’clock behind me instead? I suppose you could do a test, by blowing air gently and silently at your ears, while listening to a track or sound, and see if your brain suddenly decided to treat the source as behind you not in front?
Anyway, I’m waxing on. Thanks Dan for another enlightening video and tutorial on audio science. I’m off to get our cooling fan, and put my earbuds in, and see if Robin or the TD track, can be made to seem to be behind me, if my ears sense moving air from that direction!
I don't know about creating a 360 degree environment in processing, interesting idea. But binaural recording achieves what you describe. You might already be familiar, but it uses microphones positioned inside the ears of a fake human head. Check out virtual barbershop for an example.
Even as a fan of that 'unnatural, hard-panned sound', it doesn't work for everything. Thanks for the video, man. You're a legend.
Such a coincidence, that last night I watched his first video about LCR mixing and now I see this in my recommendations, noticing it's from 4 hours ago.
I guess it's a sign for me to subscribe.
Every video on this channel is in a whole other level. High quality information for free. Thanks!
Yes! Depth, not width - totally agree. Thanks Dan, brilliant as ever :)
So I guess this is what the Infected Mushroom Wider VST does to the signal (as well as other stereo-widening effects)
Thank you for the mid/side eq trick at the end, this is amazing
2:55We want you to show us HOW YOU DO EVERYTHING!
👍 yes 👏
No cap + the guitar effect
That "mysterious little option" instantly added so much depth, like night and day.
this is insane. I’d kill to have a tenth of this mans knowledge
Huh, unexpected but great info. The trick in SlickEQ reduced weirdness and massively improved the sound picture to my ears, and I need to implement it. I had no idea I needed this.
That all-pass filter technique went straight into buses of my template. Thanks again Dan for a neat trick!
which instruments do you use it on? I tried it on hard panned guitars, and this effect adds an unpleasant random panning effect plus to the depth. I mean real guitars, not that cloudy obscured background guitars Dan had in this mix :)
You don't have to modulate the all pass filter. Usually I wouldn't: this is more of a special effect.
@@DanWorrall oh thanks Dan!
So, UA-cam decided I should be interested in this.
While I'm not much into audio production, and I'm not an audiophile, this did remind me of something I did.
A number of years ago, back when Winamp was *the* player to use, I sometimes encountered songs that were not great with headphones, usually because the mixing was just "this instrument goes in the left ear, this other one in the right ear".
So what I did was grab the audio processor plugin, and I wrote myself a little audio processing program, that would take the input, apply a small delay, and add the result to the opposite channel with lower volume. The difference in the experience was amazing.
I don't listen to music as much as I used to, these days. Not because I don't enjoy music, but rather because I have come to enjoy the silence.
We are getting to a point, where everyone here should sign an agreement not to talk about Dan Worrall in the outside world. 🤫
This is gold!!!
so you don´t like to share? why so afraid?
@@makrowelle Just kidding.
His tutorials really give you superpowers. But as soon as everyone knows about them superpowers will become standard powers 😁
@@Mansardian and wouldn´t that be nice? think of all that good sounding music...
Three seconds of listening to this video, and I knew I had to subscribe. Quality mixing there, Dan.
What the hell? That's all. I'll be watching this one 10 times. Until then, I won't be casually high-passing my side signal...
I've been casually hipassing my side at 80-110 Hz with the brainworx hybrid eq. This just made me way more careful about that
I use Goodhertz Midside for this type of stuff, especially monoing the bass. I've never realised I could be introducing any phase issues before and I presume the Goodhertz people are smart enough to have taken it into consideration.
@@AfferbeckBeats after watching this video, I wouldn't presume anything.
@@nexusobserve So, maybe all of us have been! Isn't a point of using Mid/Side that you CAN HPF your sides to clean up the bass. So... rather than casually high passing our sides, what's the new intentional option?
Don't filter the side channel unless you need to; listen to how it changes the stereo image; if you don't like that change use a linear phase HPF.
This might be the best music production video ever. Bravo.
This is why MSED gets put on everything in mixdown.
Can you elaborate? How do you utilize MSED?
Brilliant as always, most of this even comes through on stereo Ipad speakers and I said wow out loud just before you did. Thank you Dan, can’t wait to try this!
Incredible vid as always. Can you (or anyone else) recommend a mid-side allpass capable plugin that would work in other daws besides reaper?
Many EQs. Just look at a picture of the plugin. Sometimes it's referred to as Lateral/Vertical, Mono/Stereo, Mid/Side, Sum/Diff or simply M/S. If you use a highpass filter and set it as low as possible this will have the same effect.
@@willfrank961 I have pro q and a few other m/s capable plugins, but none of them have an allpass option. A highpass filter set as low as possible is NOT the same thing as an allpass, even on sounds that do not contain low frequency content.
@@elirosenkim3659 I didn't say it was the same. I said it has the same effect. As in is causes a phase shift. Is that incorrect? Happy to learn something new.
@@willfrank961 they both cause a phase shift, but the m/s eq would cause a shift in frequency response as well, and the allpass would be more flexible in tweaking the phase response.
Stuff like this is why I invested in Waves' virtual mix room plugin. I don't have the space to set up a proper monitoring setup and using a virtual mix room plugin at least gives me something close to sitting in a room listening on speakers rather than discreet headphone panning.
It's made a huge difference to the overall quality of my mixes, if anything I've ended up with mixes that sound great out of speakers, and then sound absolutely massive in headphones.
Definitely going to try a few of these tricks out too to get even more depth.
Good knowledge
The most interesting audio tutorial I've seen in a year. Bravo!
Was that the last time you watched a Dan Worrall video?
Dont tell this to any "audiophile", since they are gonna be livid. They will probably start telling you how headphone x presents music as though one was sitting in a concert hall. Well, provided you use some bogus 2000 buck cable and wait for the stars to align in a certain way.
The phase issue with hard panned elements and EQing is something I had almost entirely forgotten about, thanks for the reminder!
No idea why the algorithm put this in front of me, but I'm not objecting too much. Most of that went over my head but I did get a small insight into what as a layman will just call "sound".
I can see an almost infinitely deep rabbit hole here that I could lose myself in. I think for my sanity I shall carefully go around it. That said I will carry with me a greater respect for those people in the field.
Thank you, you are my new favorite music production channel.