So I guess the first manufacturer that produces a mic with 2 separate capsules as physically close together as possible would let you feed two separate decorrelated channel feeds to be panned hard left and right? Maybe we might start to see another wave of two mics taped together just like the old days before widely available splitters, or reverse polarity feedback reduction a la Grateful Dead Wall of Sound?
Using two mics does not create decorrelation directly. Using two mics extremely close together is actually pretty much the same as one mic. Decorrelating signals while keeping the sound remotely the same is actually quite challenging. Maybe check out my video "Reddit Responses 4" where I do a decorrelation test, as well as the Audio University UA-cam channel, where I did a video with Kyle at L-Acoustics on decorrelation and their L-Isa system.
I wish it was that simple. Capturing de-correlated sounds from a single instrument is quite challenging. Any version of the same signal will have correlation. Changing the delay or the direction the mic is pointed just captures a slightly different-sounding version of a correlated signal. To capture or create de-correlated sounds you need to capture different parts of the instrument that have different resonances and different overtones occurring at different times with different phase relationships, and even then the fundamental tones created by the instrument will be somewhat correlated. Maybe check out the video I did called Reddit Responses number 4, and also a video that Kyle from Audio University did with me on L-Acoustics L-Isa, which actually has a de-correlation algorithm built into it.
Should I use stereo vocal mics? Haha. I laugh but it would likely help phase issues between the left and right hang. Or is the better solution an LCR rig?
It's not about perfection, it's about mitigation and minimizing issues. If you can decorrelate many or most inputs, then the vocal, laid right in the middle, has a perfect little home.
Dave Rat has had plastic comb filter surgery 0:18. That is correct, nature 0:30. 1:13 even with pink noise the speakers don't (or do they?) sound the same frequency-wise; they should, there just aren't enough PEQ filters (20,000 + 200). Look at the messy state of each compression driver, they are not mirror matched. REW shows that the copies of each frequency trace are not the same, it's messy; you need a lot of filters to get each driver to look the same with the overlays 2:00. Take rain in movie mixes, and the worst I've heard is Dolby Labs Atmos 3D sound. Yeah sure it's 3D, but it has no discrete below-surround arrays to even convince me of rain sound in outdoor scenes; rain free-falls silently from the clouds and only makes sound once it lands on surfaces. As always, Dave Rat explains the logic with such simplicity, thanks
If you delay the left side slightly, then all your cancellations and summations will move to the left slightly. When you have identical signals, the center summation point will be where the listening position is equidistant from the sources. Delaying one side is the same thing as moving one speaker a little bit farther away; it just moves the equidistant point off to one side or the other. All the cancellations in the comb filtering stay; they just move toward the delayed side.
Hi Dave, I recently got back into pro audio after 10 years away and I'm trying to learn and catch up. I saw you did 4 subs in a square for a few seconds in one video and wondered if you would expound on that setup. I'm thinking of getting 4 QSC KS212c cardioid subs; would they work in that config, or what would you recommend?
The answer is more in-depth than I can do in a reply here. But I have done numerous videos on subs with loads of information. And if you can't find enough on the public side of my UA-cam channel, on the member side, which is like $5 a month, there are even more in-depth sub videos that cover numerous aspects from all angles. You could join the members for a month, binge everything, and then cancel, and it'll be the best $5 of knowledge you've ever spent.
Yeah, if you're striving to make your sound extremely realistic and natural and you have a very high quality controlled environment, going to the extremes may be beneficial. But double micing in itself doesn't actually solve the issue; it just helps a little bit. Check out this video for a little more info ua-cam.com/video/97wg1HKkaro/v-deo.htmlsi=YCPdjjlQZ8xqQohc And also maybe check out the videos I did on "real sounding speakers"
this is required viewing/listening for anybody working in audio, in any context. got me thinking about ways to de-correlate multiple instances of the same audio source (same sound going to two speakers) as well as how to de-correlate two speakers reproducing the same audio source. noise-based (i.e. random) phase, frequency & amplitude modulation, below any threshold of noticeability, might be one way... a decorrelation signal, masked by the main audio, & applied to any duplication of that signal, in much the same way as your two non-correlated pink noise sources. is anybody doing this?
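One way to prototype that idea: give the duplicated feed a static, randomized phase treatment (an all-pass-like smear) above the low end. A minimal sketch, purely my own illustration and not a claim that any product works this way:

```python
import numpy as np

def phase_randomized_copy(x, sr, max_dev_rad=1.0, min_freq=500.0, seed=0):
    """Return a copy of x with small random phase offsets per frequency bin above min_freq.
    A crude, static decorrelator: the low end stays coherent, the rest gets an all-pass-like smear."""
    rng = np.random.default_rng(seed)
    spectrum = np.fft.rfft(x)
    freqs = np.fft.rfftfreq(len(x), 1.0 / sr)
    dphi = rng.uniform(-max_dev_rad, max_dev_rad, size=spectrum.shape)
    dphi[freqs < min_freq] = 0.0              # leave the lows untouched, they are most sensitive
    return np.fft.irfft(spectrum * np.exp(1j * dphi), n=len(x))

sr = 48000
rng = np.random.default_rng(1)
source = rng.standard_normal(sr)              # stand-in for a mono source
left = source                                 # one speaker gets the original
right = phase_randomized_copy(source, sr)     # the other gets the phase-smeared copy
```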
Love u bunches . Before I keel over, how does an old fart like me attend Coachella? I don’t think I could frolic with that crowd, even though it’s certainly on my bucket list. Just to sit behind a drum kit for a sound check, see the palm trees.
Hmmm, well, I can't get ya behind a drum kit unless you are friends with someone playing. If you make it on site, I will give ya a sound system and stage tour. The issue with Coachella is lodging is booked for 50 miles in all directions and crazy expensive and no one wants to rent a room for less than the 3 or 4 days. We rent houses for a month 8 months ahead of time and even that is an expensive challenge
Man, you beat me to the punch... I recorded a full video where I talk about exactly this and using it for monitors and sources, also for people using IEMs and wedges at the same time. I'm editing as we speak, should be out tomorrow...
@@DaveRat After watching your videos, I got into learning the physics of sound. I have no schooling background so please don't judge me too hard. It's also the first time I've tried that style of video. Thanks for everything Dave! ua-cam.com/video/2aExZqv1Fs8/v-deo.html
So no merit in the idea. What if the two capsules weren't identical in freq/phase response? If they were in some way complementary to each other and produced a sufficiently de-correlated sound?
It's a complex adventure. Just using two mics will not create decorrelated signals. And you can't get 100% decorrelated unless the two signals are completely unrelated, and multiple mics on a single instrument are inherently related. You can increase the amount of decorrelation by putting mics on different parts of the instrument to pick up different overtones and resonances with each mic; using different mic types like a ribbon and a dynamic will introduce linearity differentials; and using different polar patterns, different distances, and different EQs that introduce differing phase relationships will all reduce the correlation. Also, what I'm saying here is that two identical correlated sounds are unnatural. I never said that unnatural is bad or good, it just doesn't exist in nature, and if your goal is to have things sound realistic then it's counterproductive. If your goal is extremely smooth coverage throughout an area without a big summation bump midway between the speakers, then increasing the decorrelation will be beneficial. Maybe look at my other videos where I've talked about de-correlation and also the videos I did on real sounding speakers.
Bose was brilliant. Innovative and an inspiration. Did you know that Bose donated the majority of the company to MIT? The innovations they came up with were very forward thinking; much of the marketing was done poorly for a while and gave it a bad reputation to the casual observer. But some deeper research reveals the true brilliance of his designs and innovations. The 901 was not a HiFi-sounding speaker, but what he was trying to achieve was a more realistic sound. Bose had realized that realism in reproduced sound is highly dependent on complex reflections of the sound. Since HiFi systems pile all the instruments into 2 speakers, which destroys all the room reflections needed for realism, he made the 901 face the wall and try to emulate some or more of those reflections. Cool concepts and super innovative for the time. Check out my videos on "real sounding speakers"
@@DaveRat I do not disagree with you, that was not a Bose dis at all, I have a pair of Series IV's with the 551 Spatial Control Receiver. I was just noticing the connection between you talking about multiple speakers and arrival times and such with your series I or II up there and their speakers going 3 different directions. thank you. Of course I noticed the Crown.
@@DaveRat I bought the Bose flagship computer set up in 2008. It was definitely a great purchase for $400 dollars. We eventually destroyed it at the machine shop that I worked at. It said on the back 300 watts and it definitely pumped out the music and kept up with the cnc milling machines in the shop.
Now THIS is what sound "engineering" is all about. A true sound/audio engineer intimately knows about the physics of the medium they deal with. Thanks for the video Dave!
Wonderful and thank you!
One of the simplest demonstration but most effective have ever seen. Thank you.
Super cool and thank you
Dave rat videos always let me know I’ve been going about sound engineering a good way my entire career. Love these videos😊
🤙👍🤙
Its really handy that Smaart added multiple pink noise generators, great suggestion! Thanks for the demonstration also.
👍🤙👍
This is awesome. You have this uncanny ability to simplify complex matters.
Thank you and I really appreciate that
I remember watching the original video before this remaster. Probably the most beneficial piece of knowledge I've gained and applied to sound system tuning. Quick on the fly equalization of the front of house system is so much easier with signal generators for left and right mains. I get a clear comb filter-free reading when using this method. On a good day I'll get a left right and sub set up tuned in 15 to 20 minutes. That may seem long to many but I'm still using old RTA technology. Haven't had the time to convert to OSM or purchase Smaart.
I love that
I remember when this video came out it blew my mind and started implementing it as much as possible. Thank you David.
🤙🎤🎛️🎛️🎛️🎤🎤🎤🔧🔧🔧
Dave was so spot on by having two different arrays. One for vocals and some instruments (?) and the other array for the rest. The only way in my humble opinion to do it. ❤
It definitely helps but you can also do well by de-correlating and multiple micing.
such good info in so little time--thanks dave.
🎛️🎛️🎛️🎛️🎸🥁🥁👄🎤🎛️🎛️🔧🔧
Awesome video thanks that gives me a way better understanding of stereo and how it's affected
👍🔧👍🎛️🎛️🔧🔧🤙👍
Always appreciate you sharing this information freely and in an easy to understand manner, Dave. Makes a ton of sense.
Awesome thank you
Amazing demonstration. Because it is simple.
🎤🎤🎤🎛️🎛️🎛️🔧🔧🔧
I can’t give my usual money, my prescription payment was a little high today. Love you so much. ❤Arthur
Awesome and thank you Arthur!
Love the pic you sent. DK is sorting you for some cool stuff!!
@@DaveRat I know you're busy and I'm a nobody but I really would appreciate your help. I've just ordered new Ultimate Ears UE-18 Pros for me as a drummer. Unfortunately, since I no longer play professionally, I don't have a clue how to make them sound great. In the past I had a rack unit on stage, click track in my left ear, using a Shure PSM-900 wired belt pack, and in my right ear was the rest of the band, with my vocals through a Beyer M-88 or a Crown CM-311, which I didn't care for. Any advice please, or perhaps help with purchasing a used MicroWedge as my fail-safe? As a drummer, Chad and RHCP, darn, their music has gotten me through horrible times. I'll send more money when I can, love Arthur
So sorry for typos. I have a pick line in that really isn’t painless lol.
No, I really appreciate it, but you really don't need to send me money. These videos and sharing of info are a hobby for me that I do to help others. Financially I'm doing just fine from my other endeavors. Save your money for sound gear!
@@DaveRat may you please advise me on my monitoring question. Please?
Genuine question I’m not being cheeky: This information is so eye-opening, but how do I practically apply this to my gig?
I work in a small club with an X32 most of the time. How do I implement getting a different signal to each side? I can’t just mic up everything twice because I have limited mics and inputs. Some things just have to be mono.
Is there a more practical way to solve this issue when having a stereo source from everything isn’t an option? Like maybe a plugin/insert on the Main L/R out? Can I maybe EQ or delay each side slightly different?
Keep things in perspective. With smaller systems, in smaller rooms, stage sound is a significant part of the audio, and it naturally softens many of the issues caused by sending the same sound to both speakers.
Maybe watch my vids on panning types. Understanding is step one. Experimenting and seeing if changes bring things closer to the sound you desire is next
Then work on refining and perfecting
Also maybe check this vid
ua-cam.com/video/97wg1HKkaro/v-deo.htmlsi=kE4J84B2Q2JfUXbK
Aaah Dave You are a Legend, sharing with us all these details. We appreciate you
Super cool and thank you Hennie!!
You're lookin' good DAVE!
I've been testing so many things via sound. I finally put my guard down and said "f*ck" it. I was gonna ignore the "standard" as long as it wasn't a "fundamental".
It's been such a journey this past year and you've been there the whole way! Thanks man.
Thank you and much appreciated
That was a fantastic demonstration. Thanks.
🤙🎤🤙
Appreciate you so much! What a great hands on noise example! You gave me some ideas here to play with in my modular synth. Love learning from you! 🙏
Wonderful and thank you!
That was an awesome demonstration! I appreciate the teachings
🎛️🎛️🎛️🎤🔧
Absolutely brilliant, Dave. Thank you! Great teaching style and phenomenal logic.
👍🎛️🔧🎛️
Clear, approachable and fascinating, Dave! Thank you for continuing to selflessly share your knowledge to help others!
Curious if applying a slightly different system EQ to Left and right as well as adding a bit of delay would also mitigate the comb filtering issues?
Marginally. To really decorrelate, EQ is not enough, and delay just moves the summation center off to one side.
This makes great sense, Dave. I'm loving finally getting my feet wet with Smaart v9 and this is a feature I was unaware of. Thanks for the clever way to explain the audio challenges we all face whether we realize it or not.
Awesome! And I was excited when I got the call from Smaart telling me they included it because of me. Honored!!!
Wow!!! Very, very interesting! Thanks, Dave!!
🔊🔊🎤🎛️🎛️🎛️🎤🔊🔊🔧
@0:30 "Nowhere in nature"... Thunder reflected off hills can give comb filter effect.
Fair enough. There are many reflections and echoes in nature, but nowhere in nature is the exact same sound radiated from multiple points in space.
Even a perfect reflection recombining with the original is a scenario that differs dramatically from the same sound being radiated from 2 or more points in space.
Sound changes as it travels over distances. Sound drops 6 dB for each doubling of distance. A sound source 10 feet away will drop 6 dB if you move to 20 feet away. If that sound travels 20 feet to a reflective surface and then travels 20 more feet to the listener, the listener would need to move 30 feet farther away to hear the level drop 6 dB.
The original sound and reflected sound are not the same.
Additionally, the impact of the reflection alters the sound source as well.
But, with reproduced audio, we generate clones of the exact same sound from multiple points in space. This is an unnatural event.
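To put quick numbers on the 6 dB per doubling rule above, a minimal free-field inverse-square sketch (the function name is just for illustration):

```python
import math

def level_change_db(d_ref_ft, d_new_ft):
    """Free-field inverse-square law: level change in dB going from d_ref to d_new."""
    return 20 * math.log10(d_ref_ft / d_new_ft)

print(level_change_db(10, 20))   # -6.02 dB: doubling the distance drops about 6 dB
print(level_change_db(40, 80))   # -6.02 dB: a 40 ft reflected path must stretch to 80 ft for the same drop
```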
Always great content - cogent and concise
🤙🎛️🎤🤙
Man, I learned something today. I just recently started learning about this subject. Thank you
🤙🔧🤙
Awesome explanation/demonstration. I teach AV and Comb filtering is a topic I cover. I usually try to reproduce comb filtering in the most natural environment that the students would experience it. To be honest though, this cuts right to the bone. It covers CF and Phase issues in a simple exercise.
🎛️🎛️🎛️🔧🔧🔧😁
Thanks. Just what I need to show some folks.
PS! I have noticed in ActualNature®™: surf, and a nearby cliff! Move about and you get lovely comb filter sounds... cool.
Yes, you can get a nice reflection that'll get you some phasing in nature. I thought about that. My reasoning for excluding it was that the reflection isn't a sound source radiating sound, though I'm with you that there is a gray area.
Thanks, Dave! One of the best sound engineering vids I’ve seen in a long time. I’m trying to “level-up” the sound at our church. It’s tough with a mostly volunteer Production Team. In past lives, I’ve definitely been guilty of “settling” for what is essentially a mono mix. Recently , I started panning inputs so they sit more accurately in the stereo field (finally came to the top of the priority list). It definitely makes a discernible improvement. I’ll also try your drum mic tips.
Awesome and let me know how it goes
I've put together a workshop teaching musicians how to make the most of in-ear monitor mixes (for situations where they're mixing their own, or mixing for other people as a monitor engineer), and I reference your base concept that multiple unrelated sound sources radiating from a single point is unnatural when I'm talking about driver efficiency and how to pick and choose (and place) things in a mix. It's a super important concept when talking about sound reproduction, thanks for giving me a great way to talk about it when teaching others! I link to the original video in my "learn more" links at the end of the slides, I'll have to update it to link to this version!
Thank you so much and that makes me really happy and is super cool!
Yeah I'm going to go through and update a lot of my older videos to make them cleaner and tighter, better sound and faster paced.
Very interesting stuff. Now i get to go experiment with this concept. Fun!
Excellent, let me know how it goes
Great lesson!
🤙🔊🔊🔊🔊🤙
i love you dave you're insane in a good way
👍🎛️🔧🎛️👍 thank you!
Hi Dave. Thanks for taking time to share and demonstrate what you have learned. What's the solution for a single vocal mic which is often the case with the lead/featured singer? One mic centered.
Thanks. Dave, I would really appreciate your response to this question, which is why I've sent you a monetised super thanks. I have 2 identical pairs of speakers, which I've stacked vertically to form a left and right column array. The top and bottom speakers are powered separately by 2 different power amps, fed by the same single source, via the same single preamp. Have I minimised comb filtering? Your answer does mean a hell of a lot to me and is appreciated!
Cool cool Howard. Hmmm, the answer is yes and no.
Using a different power amp, pre amp, or anything else that does not drastically alter the audio signal, will not reduce comb filtering. Comb filtering happens when two identical or similar signals are combined either acoustically or electrically and there is a time offset between the signals. The time offset can occur electronically, which is not an issue for you as the electronic paths are nearly identical.
Or the time offset can occur acoustically. So as you move around, if you get closer to one sound source (speaker) and farther from another sound source, you will have comb filtering in any listening position where you are not equidistant from the sound sources.
By placing the speakers vertically, though, you have minimized the comb filtering in two ways.
One way is that we tend to move our heads (rotate head) and bodies horizontally around a space more than vertically, and by placing the speakers vertically, changes in comb filtering as perceived by the listener will change much less than if the speakers were placed horizontally.
Secondly, we tend to have a more difficult time hearing comb filtering issues when the comb filtering is due to a vertical interaction. Pretty much for the same reasons above but also because our ears are horizontally offset on our heads. Our vertical perception is more controlled by the shape of our ears and is a less precise perception.
Playing pink noise through the system and turning various speakers on and off should allow you to hear comb filtering.
Also know that comb filtering caused by sources that are spaced farther apart is typically not an issue and is often desirable, as in comb filtering caused by left/right stereo speaker interactions.
Comb filtering caused by sound sources that are very close together tends to be undesirable as in - two speakers placed close together reproducing the same or similar signals.
Cool cool, hope that helps and thank you!
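To put numbers on that time-offset idea, here is a small sketch (my own illustration) that converts a path-length difference between two otherwise identical arrivals into the comb-filter notch frequencies:

```python
SPEED_OF_SOUND_FT_S = 1130.0  # rough speed of sound in air, feet per second

def comb_notches(path_difference_ft, max_freq_hz=20000):
    """Frequencies where two otherwise identical arrivals cancel (odd half-cycle offsets)."""
    delay_s = path_difference_ft / SPEED_OF_SOUND_FT_S
    notches = []
    k = 0
    while True:
        f = (2 * k + 1) / (2 * delay_s)   # arrivals land 1/2, 3/2, 5/2... cycles apart
        if f > max_freq_hz:
            break
        notches.append(round(f, 1))
        k += 1
    return notches

# being 1 ft closer to one speaker than the other puts the first notch near 565 Hz
print(comb_notches(1.0)[:5])
```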
@@DaveRat thank you for that Dave. Echoes much of my own experience. Though the likes of Danny Richy from GR Research, Matthew Pose from Pose Acoustics, and Paul from PS Audio paint a very different picture. What puzzles me is that each of my speaker cabinets houses 4 bass drivers in a column array. Would this in itself not cause comb filtering? 4 identical drivers in close proximity producing the same signal.
You see, I don't believe I have a notable issue with comb filtering. My left and right speaker stacks are 10.5 feet apart and 7.5 feet tall. As I walk around the room, there is very little shift in tonal balance. My listening seat is in a sweet spot, but it's very broad. Could this be explained by the fact that bass is omnidirectional and my mid-range and high frequency drivers are dual concentric, point source?
Thanks again for your interaction. It’s appreciated.
Comb filtering, which is a series of frequency-dependent summations and cancellations caused by offset arrival times from two or more identical or similar signals, is related to 3 main factors.
We can ask a series of questions to determine whether comb filtering will occur at a significant level at a listening position:
1) Can I hear the sound from more than one source at the listening position?
If yes then
2) are the volume levels of the two sources heard at similar levels?
If the offset in level is quite large, then the comb filtering at that position will be minimal; if the levels are similar, then
3) how far apart are the sources in relation to the highest frequency those sources reproduce?
If it's two tweeters that go to 20k, where the wavelength of the highest frequency they reproduce (20k) is about 1/2", then the tweeters need to be really, really close, like a fraction of 1/2" apart, to eliminate comb filtering.
For a woofer where the highest frequency it reproduces is say, 200Hz which has a wavelength of 5.65 feet, the spacing between the woofers should be a fraction of 5.65 feet.
Typically spacings of 1/4 wavelength edge of cone to edge of cone would be the maximum to have minimal comb filtering. Spacing of 1/8th wavelength or less is optimal.
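A quick sanity check on those spacing numbers, as a rough sketch (1130 ft/s assumed for the speed of sound):

```python
SPEED_OF_SOUND_FT_S = 1130.0

def wavelength_ft(freq_hz):
    return SPEED_OF_SOUND_FT_S / freq_hz

for f in (200, 20000):
    wl = wavelength_ft(f)
    print(f"{f} Hz: wavelength {wl:.3f} ft ({wl * 12:.2f} in), "
          f"1/4-wave spacing limit {wl / 4:.2f} ft, 1/8-wave optimum {wl / 8:.2f} ft")

# 200 Hz  -> 5.65 ft wavelength, so keep woofer spacing within roughly 1.4 ft
# 20 kHz  -> roughly 0.68 in wavelength, which is why separate tweeters always comb at the very top
```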
@@DaveRat 👍🍻
I recently became fascinated with how eq creates all kinds of things I don't understand yet, but it's fun to shape the sound by sending different eq to left right and center in the mix. Specifically with guitar and Bass. I doubt it would be that fascinating in a live situation, but for our weekly live jams I mixdown, on headphones and in the car it makes that garage band sound hit a little more like a finished mix. I look forward to the day we track songs and I can have them play the same parts through different cabinets. That's when the magic will happen
Super cool. Maybe check out that video I did on different types of panning. I cover that and other interesting aspects as well
Thanks for a great demonstration of comb filtering. Much better than my efforts trying to explain it using pen and paper!
Any tricks for reducing comb filtering on a single source such as spoken word in a live environment or is that impossible??
I've covered the subject in other videos, ranging from a video on different types of panning to a video on the L-Acoustics L-Isa system that I did with Kyle from Audio University.
There are things that can be done but they're not simple and range in their amounts of effectiveness.
Look into decorrelating.
Ohhhhh. Going to be an interesting one!
🎛️🔧🎛️
I love your videos. I have been doing sound for over 25 years but I have learned so much from you, especially the science behind the process. It's great work. My question is: if I'm setting up and have 2 sets of speakers, or multiple speakers in different locations, to avoid comb filtering can I add different amounts of delay to make the signals different, or would that not work? I'm talking about .05 ms intervals or something like that.
As far as speakers and audio signals are concerned, delay is pretty much the exact same thing as distance once it gets into the real world.
So adding a millisecond of delay to a speaker is pretty much the same thing as moving that speaker a foot farther away.
So, no, adding delay will not reduce comb filtering; it'll just change the frequencies at which it happens.
To reduce comb filtering and decorrelate signals, you need to alter the fundamental and harmonic structure of the signal in complex ways.
This can be done somewhat naturally by miking different parts of an instrument.
I recommend experimenting, and the way to tell if it's actually working is to polarity reverse one of the signals. If the signals are decorrelated, then you'll hear no difference when one signal or the other is polarity reversed and both are turned on at the same time.
If the polarity reverse causes a shift in tonality, then the signals are not decorrelated.
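For those who want to try that test away from a console, here is a rough numeric version of the polarity-flip check, purely my own sketch (it only compares broadband level; the real test, as described above, is listening for a tonal shift):

```python
import numpy as np

def sum_vs_flip_db(sig_a, sig_b, floor=1e-12):
    """Level (dB) of a+b minus level of a-b.
    Near 0 dB suggests the pair is decorrelated; a large positive value means the
    polarity flip cancels energy, so the signals are still correlated."""
    level_sum = 20 * np.log10(np.std(sig_a + sig_b) + floor)
    level_flip = 20 * np.log10(np.std(sig_a - sig_b) + floor)
    return level_sum - level_flip

rng = np.random.default_rng(0)
a = rng.standard_normal(48000)
print(sum_vs_flip_db(a, a))                            # identical feeds: huge value, fully correlated
print(sum_vs_flip_db(a, rng.standard_normal(48000)))   # unrelated feeds: near 0 dB, decorrelated
```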
Thanx, much appreciated. I’ll experiment and see what I can do. I appreciate your insight, it makes me a better sound guy. Much RESPECT NY BROTHA D RAT. Blessings!
Love it!
👍🤙👍
Hi Dave, really like this concept and have been thinking about creating a one speaker per sound source system for my band. So this could be say 4 point source speakers in a row, each covering one instrument plus sub for lows as req. The challenges I see are speaker positioning (where to position the middle speakers without blocking the band in a smaller gig - overhead truss seems overkill) and coverage angle. With typical coverage angle of 90 degrees in some boxes, without a decent distance to the audience front row some will not have direct highs coverage for all the sources.
This is absolutely fascinating! Referring to the kick double-mic'd and each mic going to a separate speaker; how far do you pan each mic? Total LR? 75/75? Will this work with headphones? Time to experiment. Thanks!
Maybe watch some of my other videos on decorrelation and panning for more info
REW also has an option to use uncorrelated noise to left and right.
Technically you'd probably wanna
1) Correct L and R individually (decreases headroom asymmetrically tho)
2) Use a mid-side EQ to correct center information measuring L+R with correlated pink noise (needs more than just a hand full of measurement positions)
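For anyone unfamiliar with the mid-side idea in that second step, a minimal sketch of M/S encode and decode (a plain broadband gain stands in for the actual center-correction EQ, which is an assumption for illustration):

```python
import numpy as np

def ms_encode(left, right):
    mid = 0.5 * (left + right)    # what a centered (mono) source sums to
    side = 0.5 * (left - right)   # what only lives in the stereo difference
    return mid, side

def ms_decode(mid, side):
    return mid + side, mid - side  # back to left, right

rng = np.random.default_rng(0)
left = rng.standard_normal(1000)
right = rng.standard_normal(1000)

mid, side = ms_encode(left, right)
corrected_mid = 0.7 * mid          # placeholder for a real center-correction filter
new_left, new_right = ms_decode(corrected_mid, side)
```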
My method is to use two pink sources: panned hard left and right to check coverage, slightly panned to check kind of a real-world scenario, and panned mono to see what happens with mono sources.
Now, using the combination of the dual sources and the pan pots, it's pretty simple to emulate what will happen with the various instruments in the mix.
Thoughts on Danley's solution to this? (coherent point-source)
I love your simple and straight forward demonstrations. Thanks for the effort and video!
As long as you only use one per side and each of the "one per side" only has a single speaker for each range of frequencies, a point source will be relatively immune. But it will have other challenges like limited flexibility and coverage control.
Regardless of who makes it or what they call it, as soon as you have more than one speaker recreating the same signal in the same frequency range you encounter versions of these issues. Whether it's the Danley Paraline, which is just a horn-loaded line array, or using more than one of any of their other products, either on left and right or multiples on left or right, again, same thing.
Coherent point source, or in other words your typical home stereo cabinet, is a wonderful solution for applications where complex and/or variable coverage control is not a strong requirement.
There's a beauty to one box per side and a de-correlated stereo signal.
And then we have the realities of sound reinforcement
Great video! Could slightly delaying one side alleviate phasing issues?
Time delay is the electronic equivalent of distance.
So delaying the left speaker is about the same as just moving the left speaker 10 feet farther away.
Power alley will shift left and all the phase issues and conflicting issues will move to the left.
The issue is that the signals are identical. You need to fundamentally change or alter the signals
Maybe check out vids I have done on decorrelation on my channel and also I did a video with Kyle from Audio University on L-Acoustics L-Isa
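To put rough numbers on the delay-to-distance equivalence, a quick sketch (1130 ft/s assumed for the speed of sound):

```python
SPEED_OF_SOUND_FT_S = 1130.0

def delay_ms_to_feet(delay_ms):
    """How far 'back' an electronic delay effectively moves a speaker in free field."""
    return SPEED_OF_SOUND_FT_S * delay_ms / 1000.0

print(delay_ms_to_feet(1.0))    # ~1.1 ft: the "1 ms is about a foot" rule of thumb
print(delay_ms_to_feet(10.0))   # ~11 ft: delaying one side 10 ms is like moving it ~11 ft farther away
```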
@@DaveRat This makes sense. Thank you for the answer! I’ll check out those vids.
Yes, and the complexity is so fun and beautiful. The fact that the two signals can be completely different when looked at instantaneously, yet they're identical and just slightly delayed, is a beautiful contrast.
To get to true decorrelation, you need signals that are not only different in the moment but different over any time frame that a listener can experience.
This is fascinating stuff and I agree. I continue to think about these things in the venue I serve where I have multiple balconies with dead spaces over and under the balcony floor level.
We replaced our console last year (with a full re-wire of our stage) and are looking at replacing our entire PA in the near future. Our integrator is of the mind that it should all just be mono going to all the speakers. With the knowledge I gained from you in videos like this, I couldn't disagree more with him. I currently have two small line arrays and 20 fills. Whoever developed the previous PA solution put the subs behind the stage and I can't raise the lows without killing the musicians as a mono solution to the subs only sums it directly on center stage.
My integrator is a great salesman but not the greatest engineer and whenever I bring up modeling the PA in the development stage he waffles on it. He says he'll bring the subs out front, but at those frequencies that doesn't eliminate summing on stage. Currently, you can stand anywhere in the room, move three feet in any direction and the bass will change dramatically. In a single spot, if the bass plays a different note (which is what happens when you're playing music) one note will sum and another will cancel even though he's playing both at the same intensity. I absolutely hate it. I don't want more of the same. I'm convinced that some of the feedback anomalies I experience from time to time are due to interference patterns (comb filtering effects) on stage from the line arrays.
Sounds like you need a new integrator because getting relatively smooth sub coverage and reducing the volume on stage are very solvable adventures
Thanks Dave, as usual your explanations are perfectly worded. I knew all this, but never tried to put it into words, hence so many conflicts with musicians when trying to explain why their setups are bad; maybe this helps. :)
I was just thinking about maybe 5 horizontal arrays on each side and driving each of them with a different source, drums and bass on the inner one, then guitars on the next, then keys and vocals, and so on. Has anyone done this before?
Look up L-Acoustics L-Isa and also I did a vid with Kyle at audio University on exactly that
Hi, I've nicknamed my rig "never had two speakers the same" because I work at the other end of the scale to you, but different speakers also helps 😊
I'm mixing the same style as you and get very good results with open mics and bands in the riverbed.
🎛️🎤🔧😁
Very interesting. The only thing that comes to mind is if you give different signals to subs to reduce interference, how does this affect the db levels and coverage throughout the whole area?
There are many options and I've done several videos on that. Primarily, stereo subs give you a lot more control, and mono subs guarantee that you have all these issues.
friggin fantastic
I wonder, can you achieve the desired effect not using multiple mics but by running a very subtle effect like a bypassed analog rack mount device that colors the sound gently, and then just have two different devices, one for each left and right channels?
Decorrelation is a complex adventure. Maybe check out my Reddit Responses number four video with another demo, and also the videos I did on real sounding speakers, if you want more info.
Mind blown once again...
🔧👍🔧
Dave, how do you use that new multiple sound generator thingy from Smaart 9, for example with a 5-box line array on each side and 4 or 6 subs wherever you want them to be placed, outside? And what about a simple stereo rig with 2-way heads and 2 subs in a small 300-person venue?
Use two pink noise generators. If you pan them mono they'll combine to be a single source, and with them panned mono you can measure and hear the interactions between left and right.
Then pan one pink noise generator all the way to the left and the other all the way to the right speakers and now you can listen to coverage and measure coverage without any interaction or phase issues.
With them pan mono you'll be able to hear and measure what mono instrument sound like that are sent to both left and right.
With them panned hard left and hard right he'll be able to hear and measure what happens to things like hard panning overheads.
And if you pay on your pink noise like 10:00 2:00 or 3:00 9:00 then you can hear and measure what softly and sources sound like
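For anyone who wants to poke at the idea away from an actual rig, here is a minimal Python/numpy sketch of what the two-generator trick buys you. This is not Smaart, the pink-noise shaping is simplified, and the sample rate and seeds are arbitrary assumptions; it only shows that two independent generators panned mono put identical signals on both sides, while panned hard left/right the two sides are uncorrelated.

```python
import numpy as np

def pink_noise(n, seed, fs=48000):
    """Approximate pink (1/f power) noise by shaping white noise in the frequency domain."""
    rng = np.random.default_rng(seed)
    spectrum = np.fft.rfft(rng.standard_normal(n))
    freqs = np.fft.rfftfreq(n, d=1 / fs)
    freqs[0] = freqs[1]                       # avoid divide-by-zero at DC
    spectrum /= np.sqrt(freqs)                # -3 dB/octave slope
    pink = np.fft.irfft(spectrum, n)
    return pink / np.max(np.abs(pink))        # normalize to +/- 1

fs, seconds = 48000, 5
gen_a = pink_noise(fs * seconds, seed=1)      # generator 1
gen_b = pink_noise(fs * seconds, seed=2)      # generator 2 (different seed = independent)

# "Panned mono": both generators feed both sides, so L and R carry identical signals
mono = gen_a + gen_b
left_mono, right_mono = mono, mono

# "Panned hard L/R": each side gets its own, independent generator
left_split, right_split = gen_a, gen_b

print(np.corrcoef(left_mono, right_mono)[0, 1])    # ~1.0: will comb wherever both are audible
print(np.corrcoef(left_split, right_split)[0, 1])  # ~0.0: no stable interference pattern
```

The ~1.0 case is the one that combs anywhere both speakers are heard; the ~0.0 case is why hard-panned generators let you listen to and measure coverage cleanly.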
Would you double-mic a kick and pan it hard left and right in a 150-person venue, for example? Or does it only make sense in a larger arena/hall/whatever?
Something to play with and see if it gives a sound ya like. Try a double mic panned mono and then pan it more and more stereo and see how it works for ya. There is no right way. Just some ways that get ya closer to what you seek than others
@@DaveRat unfortunately as a weekend warrior there’s often little time to play and experiment :( i’m always super creative in my head but in reality i stick with what i was taught. The bands are more comfortable if they see you doing things every other soundperson has done for them last time as well. The best shows are when the band is focussing on themselves and the crowd, not on how they sound.
I am part of a duo. We use tracks. Should we duplicate the tracks to 2 channels and use the pan left/right to make the left and right outputs different to avoid comb filtering? And should we be using panning of voices and our guitars to cut down on comb filtering as well? Thanks.
You don't need to totally eliminate comb filtering; it's more about understanding what causes it and using it selectively to your advantage.
Now that you have a little bit better idea of what's going on, it's time to experiment and find the sounds that you want. Perhaps implement double-micing and such on some things to increase the stereo image and reduce center summation and such
Bro! What was that DSP unit that did decorrelation electronically? It seemed like it worked well. What are your thoughts on it?
I think it was in that video with you on someone else's channel with a bunch of small line arrays.
That would be Kyle from Audio University, and the equipment was L-Acoustics L-Isa
Dave, I don't actually use measurements, but you make total sense here x😊
🔧🎛️🔧
Cool as hell
🎤🤙🎤
2 different kick mics sounds simple, but how do you go about vocals?
Especially if they use a handheld and not a mic stand?
You don't need everything decorrelated, but getting some things decorrelated is way better than nearly nothing decorrelated.
Also, stereo vocal efx will soften the totally non-decorrelated nature of a vocal mic.
It's about understanding what is happening and making improvements, like taking a shower.
No matter how many showers you take, you'll never be 100% clean and free of all bacteria and dirt. But mixing pure mono is like taking no showers. And that is not ideal.
I cover much more on the topic in my decorrelation videos and vids on building a real sounding speaker
And also, this vid I did with Kyle at audio University ua-cam.com/video/97wg1HKkaro/v-deo.htmlsi=kE4J84B2Q2JfUXbK
@@DaveRat "your never be 100% clean and free of all bacteria and dirt" 😱but I scrub and scrub!
🤙🤙🤙🤙
@@DaveRat gracias
🤙🎛️🎤🔊🤙
Sent this video to my dad who thinks I just fart around with bands for a living. #MindBlown lol
Great video
🎤🎛️🎤🔧🔧🔧😁
So I guess the first manufacturer that produces a mic with 2 separate capsules as physically close together as possible would let you feed two separate decorrelated channel feeds to be panned hard left and right? Maybe we might start to see another wave of two mics taped together, just like the old days before widely available splitters, or reverse-polarity feedback reduction a la Grateful Dead Wall of Sound?
Using two mics does not create decorrelation directly. Using two mics extremely close together is actually pretty much the same as one mic
Decorrelating signals and getting the sound remotely the same is actually quite challenging
Maybe check out my video "Reddit Responses 4" where I do a decorrelation test, as well as the Audio University UA-cam channel; I did a video with Kyle there at L-Acoustics on decorrelation and their L-Isa system
Dave, what if, just what if, you use an XY mic technique per source: vocals, snare, kick, etc.
I wish it was that simple. Capturing de-correlated sounds from a single instrument is quite challenging.
Any version of the same signal will have correlation.
Changing delay or the direction that the mic is pointed just captures a slightly different sounding version of a correlated signal.
To capture or create decorrelated sounds you need to capture different parts of the instrument that have different resonances and different overtones occurring at different times with different phase relationships, and even then the fundamental tones created by the instrument will be somewhat correlated.
Maybe check out the video I did called "Reddit Responses 4", and also a video that Kyle from Audio University did with me on L-Acoustics L-Isa, which actually has a decorrelation algorithm built into it
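To put a rough number on the correlation point above, here is a small Python/numpy/scipy sketch. It is purely illustrative: the delay, the one-pole filter, and the synthetic "instrument" are invented stand-ins for "a slightly different sounding version of the same signal". It shows that a delayed, gently EQ'd copy still measures as highly correlated, while a genuinely unrelated signal does not.

```python
import numpy as np
from scipy.signal import correlate, lfilter

fs = 48000
t = np.arange(fs) / fs
rng = np.random.default_rng(0)

# Crude stand-in for an instrument: a few decaying partials plus a little noise
source = sum(np.exp(-3 * t) * np.sin(2 * np.pi * f * t) for f in (110, 220, 330))
source = source + 0.05 * rng.standard_normal(fs)

def peak_correlation(a, b):
    """Peak of the normalized cross-correlation (1.0 = identical, ~0 = unrelated)."""
    a = (a - a.mean()) / (a.std() * len(a))
    b = (b - b.mean()) / b.std()
    return float(np.max(np.abs(correlate(a, b, mode="full", method="fft"))))

# "Second mic": the same source, slightly delayed and gently EQ'd (one-pole low-pass)
second_mic = lfilter([0.3], [1.0, -0.7], np.roll(source, 12))  # ~0.25 ms later, a bit duller

# A genuinely unrelated signal for comparison
unrelated = rng.standard_normal(fs)

print(peak_correlation(source, second_mic))  # stays high: delay + EQ is not decorrelation
print(peak_correlation(source, unrelated))   # near zero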
Should I use stereo vocal mics? Haha. I laugh, but it would likely help phase issues between the left and right hangs. Or is the better solution an LCR rig?
It's not about perfection, it's about mitigation and minimizing issues.
If you can decorrelate many or most inputs, then a vocal laid right in the middle has a perfect little home
Dave Rat has had plastic comb-filter surgery 0:18, that is correct, nature 0:30 1:13. Even with pink noise the speakers don't (or do they?) sound frequency-matched; they should, but there aren't enough filters, more like 20,000+ needed versus 200 PEQ filters. Look at the messy state of each compression driver, they are not mirror-matched. REW shows the frequency responses are not identical copies of each other, it's messy; you need a lot of filters to get each driver to look the same on the overlays.
2:00 Take rain in movie mixes; the worst I've heard is Dolby Labs' Atmos 3D sound. Yeah, sure it's 3D, but it has no discrete below-surround arrays to even convince me of rain sound in outdoor scenes. Rain falls silently from the clouds and only makes sound once each drop lands on a surface.
As always, Dave Rat explains the logic with such simplicity, thanks
What happens if you send identical sources to each side, but delay one side by a very small, imperceptible amount?
If you delay the left side slightly, then all your cancellations and summations will move slightly toward the left.
When you have identical signals, the center summation point will be where the listening position is equidistant from the sources. Delaying one side is the same thing as moving one speaker a little bit farther away; it just moves the equidistant point off to one side or the other.
All the cancellations and comb filtering stay; they just move toward the delayed side
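A tiny worked illustration of why that is, under idealized assumptions (free field, two equal-level, otherwise identical arrivals; nothing from the video itself): wherever the two arrivals end up offset by a time difference tau, the summed magnitude is 2|cos(pi*f*tau)|, with notches at odd multiples of 1/(2*tau). Adding electronic delay only changes which seats see which tau; it never removes the notches.

```python
import numpy as np

def comb_response_db(freqs_hz, tau_s):
    # Level of two equal, identical arrivals offset by tau: |1 + e^(-j*2*pi*f*tau)| = 2|cos(pi*f*tau)|
    return 20 * np.log10(np.abs(1 + np.exp(-2j * np.pi * freqs_hz * tau_s)) + 1e-12)

freqs = np.array([125.0, 250.0, 500.0, 1000.0, 2000.0, 4000.0])
for tau_ms in (0.5, 1.0):  # arrival-time offsets of roughly 17 cm and 34 cm of path difference
    print(f"tau = {tau_ms} ms:", np.round(comb_response_db(freqs, tau_ms / 1000), 1), "dB")
```

With tau = 0.5 ms the notch sits at 1 kHz; with tau = 1 ms it moves to 500 Hz. The comb never goes away, it just lands on different frequencies at different seats.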
@@DaveRat that makes sense... oh well, it was a thought...
Foiled again...
I wish it was that easy but also that would make this nowhere near as fun
Hi Dave, I recently got back into pro audio after 10 yrs away and I'm trying to learn and catch up. I saw you did 4 subs in a square for a few seconds in one video and wondered if you would expound on that setup. I'm thinking of getting 4 QSC KS212C cardioid subs; would they work in that config, or what would you recommend?
The answer is more in-depth than I can do in a reply here. But I have done numerous videos on subs with loads of information. And if you can't find enough on the public side of my UA-cam channel, on the member side, which is like $5 a month, there are even more in-depth sub videos that cover numerous aspects from all angles.
You could join the membership for a month, binge everything, and then cancel, and have the best $5 of knowledge you've ever spent
@@DaveRat thx 🙏 I will check that out
👍🔧👍
Great and interesting. I see comments here from people saying they are implementing it, so, you guys are double-micing and running double lines on almost everything?
Yeah, if you're striving to make your sound extremely realistic and natural, and you have a very high quality, controlled environment, going to the extremes may be beneficial.
But double-micing in itself doesn't actually solve the issue; it just helps a little bit.
Check out this video for a little more info
ua-cam.com/video/97wg1HKkaro/v-deo.htmlsi=YCPdjjlQZ8xqQohc
And also maybe check out the videos I did on "real sounding speakers"
Great, thanks Dave I will check it out
🔧🎛️🔧
this is required viewing/listening for anybody working in audio, in any context.
got me thinking about ways to de-correlate multiple instances of the same audio source (same sound going to two speakers) as well as how to de-correlate two speakers reproducing the same audio source.
noise-based (i.e. random) phase, frequency & amplitude modulation, below any threshold of noticeability, might be one way...
a decorrelation signal, masked by the main audio, & applied to any duplication of that signal, in much the same way as your two non-correlated pink noise sources.
is anybody doing this?
Check out the UA-cam videos I did with Kyle from Audio University over at L-Acoustics on the L-Isa system
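For what it's worth, the "masked random modulation" idea above resembles a known family of decorrelators that randomize phase while leaving magnitude alone. Here is a minimal Python/numpy sketch of that flavor of processing, as a toy only: the function name, parameter values, and the 500 Hz cutoff are my own assumptions, and this is not how L-Isa or Smaart works. Small jitter is more transparent but decorrelates less; bigger jitter decorrelates more but risks audible smearing.

```python
import numpy as np

def phase_jitter_decorrelate(signal, fs=48000, max_phase_rad=np.pi / 2, lowest_hz=500, seed=0):
    """Toy FFT-domain decorrelator: add random phase per bin, keep magnitudes (and the low end) intact."""
    rng = np.random.default_rng(seed)
    spectrum = np.fft.rfft(signal)
    freqs = np.fft.rfftfreq(len(signal), d=1 / fs)
    jitter = rng.uniform(-max_phase_rad, max_phase_rad, len(spectrum))
    jitter[freqs < lowest_hz] = 0.0               # leave the bass coherent
    return np.fft.irfft(spectrum * np.exp(1j * jitter), len(signal))

# Same mono source to both sides, but one side gets the jittered copy
rng = np.random.default_rng(1)
mono = rng.standard_normal(48000)                 # stand-in for any mono input
left, right = mono, phase_jitter_decorrelate(mono)

print(np.corrcoef(left, right)[0, 1])             # noticeably below 1.0: partially decorrelated
```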
Love u bunches. Before I keel over, how does an old fart like me attend Coachella? I don't think I could frolic with that crowd, even though it's certainly on my bucket list. Just to sit behind a drum kit for a sound check, see the palm trees.
Hmmm, well, I can't get ya behind a drum kit unless you are friends with someone playing. If you make it on site, I will give ya a sound system and stage tour. The issue with Coachella is lodging is booked for 50 miles in all directions and crazy expensive and no one wants to rent a room for less than the 3 or 4 days. We rent houses for a month 8 months ahead of time and even that is an expensive challenge
@@DaveRat Thank you
🤙🎤🤙
Man, you beat me to the punch... I recorded a full video where I talk about exactly this and using it for monitors and sources
also for people using IEMs and wedges at the same time. I'm editing as we speak, should be out tomorrow...
Super cool, post a link when done
@@DaveRat
After watching your videos, I got into learning the physics of sound.
I have no schooling background so please don't judge me too hard; it's also the first time I've tried that style of video.
thanks for everything Dave!
ua-cam.com/video/2aExZqv1Fs8/v-deo.html
If ever youtube needed a stickie, this would be a good one.
That's kind of a fun idea. I'll put this as my channel video based on your comment here
Dave and Kyle to Run sound for a night? Epic pro sound in action, You Tube Live concert. Sounds good to me. Let's set it up. 😉
I'm a retired sound guy. Now I enjoy other people doing that job
Enjoy retirement. I love the content you create on UA-cam. Helps out tremendously. Thank you. @@DaveRat
@goodtimejohnny8972 super cool and thank you so much!
0:52 si(moo)taneously 😂😂
🤙👍🤙
Wow! Dave's real hair color!!
Not any more, I've got the transition rolling in
@@DaveRat lol! I doubt that very much!! ❤️
Yeah, yay gray
cool
🤙🔊🤙🎤🎛️
So, no merit in the idea. What if the two capsules weren't identical in freq/phase response? If they were in some way complementary to each other and produced a sufficiently decorrelated sound?
It's a complex adventure. Just using two mics will not create decorrelated signals.
And you can't get 100% decorrelated unless the two signals are completely unrelated, and multiple mics on a single instrument are inherently related.
You can increase the amount of decorrelation by putting mics on different parts of the instrument to pick up different overtones and resonances with each mic, using different mic types (like a ribbon and a dynamic) to introduce linearity differentials, and using different polar patterns, different distances, and different EQs that introduce differing phase relationships, all of which reduce the correlation.
Also, what I'm saying here is that two identical correlated sounds are unnatural.
I never said that unnatural is bad or good; it just doesn't exist in nature, and if your goal is to have things sound realistic then it's counterproductive.
If your goal is extremely smooth coverage throughout an area without a big summation bump midway between the speakers, then increasing the decorrelation will be beneficial.
Maybe look at my other videos where I've talked about decorrelation, and also the videos I did on real sounding speakers
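One last toy illustration of the "different parts of the instrument" point, with entirely hypothetical numbers: two synthetic "mic positions" that share a fundamental but have different overtone balances, decays, and phases measure as noticeably less correlated than a plain delayed copy, yet never fully uncorrelated, because the fundamental is shared.

```python
import numpy as np

fs = 48000
t = np.arange(fs) / fs

def partial(freq_hz, amp, decay, phase=0.0):
    """One decaying sine partial of a toy instrument."""
    return amp * np.exp(-decay * t) * np.sin(2 * np.pi * freq_hz * t + phase)

# Hypothetical mic near the strike point: fundamental plus strong upper overtones
mic_a = partial(110, 1.0, 3) + partial(220, 0.8, 5) + partial(330, 0.6, 8)

# Hypothetical mic near the body: same fundamental, different overtone mix, decays, and phases
mic_b = partial(110, 1.0, 3, 0.3) + partial(165, 0.5, 4, 1.0) + partial(220, 0.2, 6, 2.0)

# A plain delayed copy of mic A, for comparison (~0.5 ms)
delayed_copy = np.roll(mic_a, 24)

print(np.corrcoef(mic_a, delayed_copy)[0, 1])  # high: still essentially the same signal
print(np.corrcoef(mic_a, mic_b)[0, 1])         # lower, but not zero: the fundamental is shared
```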
I got very caught by the thumbnail… I can't confess what came to mind… 🤣
I had fun with the thumbnail! I made the video and scrubbed through it to look for something to use, and it just seemed so perfect and enjoyable
@@DaveRat great one
And as always great video, definitely trying out something different this weekend
🎤🎤🎤🎤🎛️🎛️🔧🔧🤙
I can’t disagree with any
🎤🎤🎛️🎛️🎤🎤🤙🤙
I want to hear the cow 😶
Me too!
Funny you're doing this video under a pair of Bose 901's.
And a D150 Crown amp
@@DaveRat Is this good in your opinion? I have always loved Crown amps. Bose not quite as much.
Bose was brilliant, innovative, and an inspiration. Did you know that Bose donated the entire company to MIT?
The innovations they came up with were very forward-thinking; much of the marketing was done poorly for a while, which gave them a bad reputation with the casual observer.
But some deeper research reveals the true brilliance of his designs and innovations
The 901 was not a HiFi-sounding speaker, but what he was trying to achieve was to create a more realistic sound.
Bose had realized that realism in reproduced sound is highly dependent on complex reflections of the sound.
Since HiFi speakers pile all the instruments into 2 speakers, which destroys the room reflections needed for realism, he made the 901 face the wall to try to emulate some of those reflections.
Cool concepts and super innovative for the time
Check out my videos on "real sounding speakers"
@@DaveRat I do not disagree with you, that was not a Bose dis at all, I have a pair of Series IV's with the 551 Spatial Control Receiver.
I was just noticing the connection between you talking about multiple speakers and arrival times and such, with your Series I or II up there and their speakers firing in 3 different directions. Thank you.
Of course I noticed the Crown.
@@DaveRat I bought the Bose flagship computer setup in 2008. It was definitely a great purchase for $400. We eventually destroyed it at the machine shop where I worked. It said 300 watts on the back, and it definitely pumped out the music and kept up with the CNC milling machines in the shop.
hi
👍
So it's not all about rocknroll is it?!!!🤣🤣🤣
😁🎛️😁