Hey dude, thanks for all of the content you put out. I don't catch all of it, but the parts I do are always super helpful and insightful. Thanks for sharing the knowledge :)
Thanks for all your content. For the setup you used in this video, can I use a 2-channel sound card?
Thanks.
Yes you can!
Hello Nathan, "When staging a performance in an indoor venue, what strategies can we employ to address the variations in tonality resulting from the room's unique acoustic properties?"
Hey Robin, can you give me an example? Maybe this happened to you recently? Tell me more about what you mean by changes in tonality. From my perspective, it seems like the sound system calibration process should account for this.
@nathanlively Thanks for your reply... In my experience, indoor environments tend to exhibit a pronounced feedback effect. This is primarily because certain frequencies may be amplified, contingent on the specific acoustical properties of the room. What strategies should we employ to effectively mitigate such a challenge?
To what extent are Finite Impulse Response (FIR) filters practically applicable in a live system for the purpose of room correction?
@robintanj Ah, I see. I don't have any good advice. All I know to do is apply best practices for system tuning. Check every point in the signal chain to maximize GBF (gain before feedback). I'm working on an anti-feedback plugin. Would that help?
Make sure that you know where your alignment positions are. You don’t want to be accidentally summing main and sub into an open mic onstage for example.
Otherwise it’s just EQ.
Can you explain more clearly how to parallel compress drums without getting comb filtering?
That would be using two stereo subgroups of drums with a compressor inserted on each subgroup, correct? How do you get around the offset destruction?
Hey FreeKeenan, you need to make sure that both signal paths have the exact same latency. This is usually pretty easy if you just send to both groups and use the exact same processing. You could have the compressor in, just to be sure, but turn the threshold all the way up so it's not doing anything. Send pink noise through, add the second group, see if you hear any comb filtering. The more advanced way to do this is to actually measure the latency of each channel. Robert Scovill has several videos about this.
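If you want to see why even a tiny offset is so audible, here is a minimal Python sketch (numpy assumed; the 48 kHz rate and 1 ms offset are example values, not anything from the video) showing where the comb-filter nulls land when a signal sums with a slightly delayed copy of itself:

```python
import numpy as np

# Summing a signal with a copy of itself delayed by t seconds produces
# nulls at f = (2k + 1) / (2t): the classic comb filter.
fs = 48000        # sample rate in Hz (assumed)
delay_ms = 1.0    # latency mismatch between the two drum groups (example)

t = delay_ms / 1000.0
nulls = [(2 * k + 1) / (2 * t) for k in range(4)]
print(f"{delay_ms} ms offset -> nulls at", ", ".join(f"{f:.0f} Hz" for f in nulls))

# The same thing measured: build the impulse response of "unity + delay"
# and look at its magnitude response.
n = int(round(t * fs))                      # delay in samples
h = np.zeros(n + 1)
h[0], h[n] = 1.0, 1.0
H = np.abs(np.fft.rfft(h, 8192))
freqs = np.fft.rfftfreq(8192, 1 / fs)
band = freqs < 1000                         # hunt for the first null
print(f"deepest dip below 1 kHz: {freqs[band][np.argmin(H[band])]:.0f} Hz")
```

With just 1 ms of mismatch the first null lands at 500 Hz, right in the body of the drums, which is why matching latency on both paths matters so much.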
Hi Nathan, thanks a lot for the video. Everything is understood, but one question: everybody speaks about the phase shift caused by a GEQ or PEQ, but in a practical situation, why does it matter? We do not send any parallel signals of a channel, like one with EQ and another without, and phase shift matters where there are two identical signals with different timing. Could you please point out a few practical situations (especially in live sound) where the phase shift through the EQ matters? Thanks in advance.
Hey Priyan, please see if this helps: ua-cam.com/video/Z-lEyq4sb_k/v-deo.html
Ken "Pooch"Van Druten told me once to use EQ very sparingly. sort of how a surgeon would use a scalpel
Thanks for the content. How would you route this on a QL5?
Hey Phil, you could insert a graphic EQ on an input channel.
Love it! Finally, a question: did you do a video on what's the best method?
Hey Kidcupid, what's the best method to do what?
Da best explanation... thanks Nathan.
Hi Nathan,
Great info! Thanks.
So, how would you route this on, say a Digico? Or any other console where you couldn't take the measurement-signal "pre-everything" on a matrix output? I'm kind of stuck with this.
Right now I make an extra matrix to be the measurement, and gang it with my actual LR matrix. But that's probably not the right way to go.
Hey Marc, I did make that step a little complicated. Let's assume that you are mixing everything to a group, then sending that to the matrix where you are doing some output processing for the different speakers. When you route that group to the matrix, also route it to a physical output, thereby avoiding the processing you might do on the matrix. Does that work?
5:55 HARMONICS!
Hi Nathan, great video. I have always wondered what actually causes the phase shift when boosting or cutting a frequency with an EQ, and why it makes a sort of 'z' shape (in the phase response) across the center frequency. Technically, what is causing that? It's the same for analog EQs and digital EQs, unless it is a linear-phase EQ. How does it affect what we are hearing in the end? Thanks in advance.
Hi Yamin, thanks for checking out the video. I may be wrong, but my understanding of the way an EQ works is that it takes two copies of the signal, adds phase shift to one of them, then adds them back together.
Hopefully, in the end, it has a balanced effect on what we hear. Why? Because if there is an EQ change caused by one device, accompanied by a relative phase shift, and we correct it with a complementary EQ and phase shift, it should come out to zero. Make sense?
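If anyone wants to see that 'z' shape in numbers, one standard minimum-phase bell is the RBJ Audio EQ Cookbook peaking biquad. A minimal Python sketch (scipy assumed; the 1 kHz, +6 dB, Q = 2 settings are example values):

```python
import numpy as np
from scipy.signal import freqz

# RBJ cookbook peaking-EQ biquad: a +6 dB bell at 1 kHz, Q = 2 (examples).
fs, f0, gain_db, Q = 48000, 1000.0, 6.0, 2.0
A = 10 ** (gain_db / 40)
w0 = 2 * np.pi * f0 / fs
alpha = np.sin(w0) / (2 * Q)

b = np.array([1 + alpha * A, -2 * np.cos(w0), 1 - alpha * A])
a = np.array([1 + alpha / A, -2 * np.cos(w0), 1 - alpha / A])

w, h = freqz(b, a, worN=4096, fs=fs)
mag_db = 20 * np.log10(np.abs(h))
phase_deg = np.degrees(np.angle(h))

# For a boost, phase leads below the center frequency, crosses zero exactly
# at it, and lags above it: the 'z' (or 's') shape in the phase trace.
for f in (250, 500, 1000, 2000, 4000):
    i = np.argmin(np.abs(w - f))
    print(f"{f:>5.0f} Hz: {mag_db[i]:+6.2f} dB, {phase_deg[i]:+7.1f} deg")
```

The magnitude and phase changes travel together here, which is exactly why a complementary cut can undo both at once.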
So with the impulse response having the extra noise from the graphic EQ, is it correct to assume that is a representation of the lag experienced by the phase-shifted frequency ranges?
Bingo.
Which is also why a subwoofer's impulse response looks like a long lazy snake.
Those peaks are Harmonics!
Yes!
mind blown !!
I know, right?!
What does the impulse response graph tell us?
If it looks weird after applying the graphic EQ, does this affect the sound? Does the impulse response affect the sound? How?
Hey Casa, the impulse response tells us amplitude and time. Compare the IR of a microphone cable (a single peak) to that of a subwoofer (long stretched out).
Yes, the GEQ will affect the sound and the IR. In most cases, any change to the magnitude response will come with a change to the phase response, and therefore, the IR. There's no free lunch. :)
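To make that concrete, here is a minimal Python sketch (scipy assumed; the 60 Hz, +6 dB, Q = 4 bell is an example value) that passes a single click through a low-frequency boost and measures how much longer the impulse response rings:

```python
import numpy as np
from scipy.signal import lfilter

# Pass a unit impulse through a low-frequency bell boost and compare how
# long the "wire" IR and the EQ'd IR stay above -60 dB.
fs = 48000
impulse = np.zeros(fs // 2)     # half a second of silence...
impulse[0] = 1.0                # ...with a single click at the start

# RBJ peaking biquad again, down at 60 Hz where the ringing is long
# enough to see (example values).
f0, gain_db, Q = 60.0, 6.0, 4.0
A = 10 ** (gain_db / 40)
w0 = 2 * np.pi * f0 / fs
alpha = np.sin(w0) / (2 * Q)
b = [1 + alpha * A, -2 * np.cos(w0), 1 - alpha * A]
a = [1 + alpha / A, -2 * np.cos(w0), 1 - alpha / A]

ir = lfilter(b, a, impulse)     # lfilter normalizes by a[0] internally

def decay_ms(x, floor_db=-60.0):
    """Time of the last sample still above floor_db relative to the peak."""
    thresh = np.max(np.abs(x)) * 10 ** (floor_db / 20)
    return np.where(np.abs(x) > thresh)[0].max() / fs * 1000

print(f"wire IR above -60 dB for {decay_ms(impulse):6.2f} ms")
print(f"EQ'd IR above -60 dB for {decay_ms(ir):6.2f} ms")
```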
@nathanlively I understand. Thank you!
Noob question... How do you compensate for filling the room full of people? The EQ curve totally changes.
Hey Mark, thanks for checking out the video. I would argue that that is a common misconception. Yes, the floor reflection might be removed, but you can compensate for that with ground plane measurements. The temperature might change, shifting your delay times, but you can recalculate those. The humidity might change, but you can compensate with a high shelf filter.
All of that aside, the answer is that you keep measuring. After your tuning you take a combined trace. During soundcheck you compare. Once the show starts you compare again.
Thoughts?
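For the temperature part of that answer, the recalculation is simple arithmetic. A minimal Python sketch using the common approximation c = 331.3 + 0.606T m/s (the 30 m delay distance is an example value, not from the video):

```python
# Speed of sound in air, approximated as c = 331.3 + 0.606 * T (T in deg C).
# A delay speaker 30 m behind the mains needs its delay revisited when the
# room warms up (the distance is an example value).
def speed_of_sound(temp_c):
    return 331.3 + 0.606 * temp_c

distance_m = 30.0
for temp_c in (15, 25, 35):
    c = speed_of_sound(temp_c)
    print(f"{temp_c} deg C: c = {c:.1f} m/s, {distance_m:.0f} m = "
          f"{1000 * distance_m / c:.2f} ms")
```

That works out to roughly 1.5 ms of drift per 10 deg C at 30 m, which is why those delay times are worth rechecking.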
That would be harmonic distortion from any electronic equipment. A pure sine wave is created digitally, but the D/A converters, at the moment of generating the electrical signal, resonate with the components and create harmonics. This is measured as THD (total harmonic distortion); lower is better, especially in broadcast, because unintentional harmonics create a ton of problems. Don't misinterpret this as the harmonics in music; these are created as a margin of error by the electronics, and a perfect electronic device would not have them.
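For the curious, THD is just the ratio of harmonic level to the fundamental, read off an FFT. A minimal Python sketch (numpy assumed; the soft clipper is a stand-in for converter nonlinearity, not a model of any real device):

```python
import numpy as np

# THD: the ratio of harmonic energy to the fundamental, read off an FFT.
fs, f0, n = 48000, 1000, 48000               # exactly 1 s of a 1 kHz tone
t = np.arange(n) / fs
clean = np.sin(2 * np.pi * f0 * t)
distorted = np.tanh(2 * clean) / np.tanh(2)  # soft clipping adds odd harmonics

# With n == fs the FFT bins are exactly 1 Hz wide, so bin index == frequency.
spectrum = np.abs(np.fft.rfft(distorted))
fundamental = spectrum[f0]
harmonics = [spectrum[k * f0] for k in range(2, 6)]   # 2nd through 5th
thd = np.sqrt(sum(h ** 2 for h in harmonics)) / fundamental
print(f"THD (harmonics 2-5): {100 * thd:.2f} %")
```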
Smoothing is implemented badly in both Smaart and SysTune. In 1/1 mode you can even see straight line segments where there should be a continuous smooth curve. Something like (bi)cubic interpolation should be used...
Hmmm, I had no idea. Good to know. Is there another audio analyzer that you prefer, one that you think has handled this better?
@nathanlively I'm not a "pro" in sound measuring, but as a programmer I can see the issue at first glance.
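For anyone wondering what fractional-octave smoothing does under the hood, one simple textbook approach is a frequency-proportional moving average across FFT bins. A minimal Python sketch (numpy assumed; this is a generic illustration, not how Smaart or SysTune actually implement it):

```python
import numpy as np

def octave_smooth(freqs, mag_db, fraction=1.0):
    """Average each bin over a window 1/fraction octaves wide around it."""
    out = np.empty_like(mag_db)
    for i, f in enumerate(freqs):
        if f <= 0:
            out[i] = mag_db[i]          # skip the DC bin
            continue
        lo, hi = f * 2 ** (-0.5 / fraction), f * 2 ** (0.5 / fraction)
        win = (freqs >= lo) & (freqs <= hi)
        out[i] = mag_db[win].mean()
    return out

# Toy response: flat, with a narrow -10 dB notch at 1 kHz.
freqs = np.fft.rfftfreq(8192, 1 / 48000)
mag = np.zeros_like(freqs)
mag[(freqs > 990) & (freqs < 1010)] = -10.0
smoothed = octave_smooth(freqs, mag, fraction=1.0)   # 1/1-octave smoothing
print(f"notch: {mag.min():.1f} dB raw, {smoothed.min():.1f} dB after smoothing")
```

The straight lines described above would come from the analyzer drawing linearly between a few smoothed points; that interpolation choice is separate from the averaging itself.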
Do you EQ the subwoofer too?
What is delay tracking for?
Hey Casa, this is a big question, so I am going to direct you to the Smaart user manual under Chapter 6, Delay Compensation.
@nathanlively Thank you very much!
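For anyone else landing here: delay compensation ("delay tracking") boils down to measuring the time offset between the analyzer's reference signal and what the measurement microphone hears, then sliding one against the other. A minimal Python sketch (scipy assumed; not Smaart's actual algorithm) that finds a delay by cross-correlation:

```python
import numpy as np
from scipy.signal import correlate, correlation_lags

# Find the offset between a reference and a delayed, noisy measurement by
# locating the peak of their cross-correlation.
rng = np.random.default_rng(0)
fs = 48000
ref = rng.standard_normal(fs)                    # 1 s of noise as "reference"
true_delay = 437                                 # samples (example value)
meas = np.concatenate([np.zeros(true_delay), ref])[:fs]
meas = meas + 0.1 * rng.standard_normal(fs)      # add measurement noise

corr = correlate(meas, ref, mode="full", method="fft")
lags = correlation_lags(len(meas), len(ref), mode="full")
found = lags[np.argmax(corr)]
print(f"found {found} samples = {1000 * found / fs:.2f} ms (true: {true_delay})")
```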
Harmonics?
🏆
Overtones.