Ask Me Anything About Audio System Design
#201
Good luck...
#202
Hi
Just acquired a Hertz MLK 3 set. Tweeters in sail panels and woofers in doors. The ribbon midrange is recommended to sit horizontally on the dash facing the windscreen.
Your thoughts on the midrange placement and direction?
Thanks
#203
"...move your tweeters -move your sound stage; ... "
Soundstage: the ability of our ears/brain to construct a virtual picture of where the different sounds (instruments, voices) emanate from.
How do we hear?
In the midrange frequencies (<1500 Hz) we hear differences in arrival time (known as interaural time differences, ITD). The brain can calculate location from the difference in arrival times between the two ears. Move your midrange, and you move the apparent location the sound emanates from.
At higher frequencies (>3000 Hz), the ears/brain can no longer use arrival times to locate where a sound is coming from. Instead it compares intensity, or loudness (known as interaural intensity/level differences, IID or ILD). Change the loudness, and you move the apparent location the sound emanates from.
So how can moving a tweeter, a range where we can't tell differences based on distance but only on loudness, move the soundstage?
If you want to confirm that the "meat and potatoes" of the soundstage, so to speak, doesn't come from the tweeters, listen to the system with only the tweeters and then with only the midranges. See for yourself how well the stage holds up.
It's no secret that we're bad at localization at higher frequencies. I'm sure a lot of us have gone searching for something making a high-frequency sound and simply couldn't find it, no matter how much we moved our heads and looked around. In constructing the virtual picture, our brain relies more on the midrange than on the higher frequencies. I'm not saying there's no benefit in optimizing the tweeters, but their location has little to do with the stage.
Some people will say that moving their tweeters has some effect on the stage. Well, I'll tell you, it's not because of their location relative to your ears, but rather what's near them causing reflections. So it comes down to the environment being the number one obstacle to perfect sound reproduction. Not DACs, not THD, etc. OK, I'm getting off topic.
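If you want to put rough numbers on the ITD part of this, the classic Woodworth spherical-head approximation is a quick sketch (this model, the function name, and the ~8.75 cm head radius are textbook assumptions of mine, not anything from this thread):

```python
import math

SPEED_OF_SOUND = 343.0  # m/s in air at roughly room temperature

def itd_woodworth(azimuth_deg, head_radius=0.0875):
    """Approximate interaural time difference (seconds) for a
    spherical-head model: ITD = (a/c) * (sin(theta) + theta).
    azimuth_deg: source angle off the median plane, 0 = dead ahead.
    head_radius: ~8.75 cm, an average adult head radius."""
    theta = math.radians(azimuth_deg)
    return (head_radius / SPEED_OF_SOUND) * (math.sin(theta) + theta)

# A source 30 degrees off-center reaches the near ear ~0.26 ms sooner:
print(round(itd_woodworth(30) * 1e6))  # ~261 microseconds
```

The point the post makes falls out of the numbers: those sub-millisecond arrival differences are what the brain exploits below ~1500 Hz, where the wavelength is long compared to the head.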
#204
Let's start with this.
#205
I've seen vids on YouTube of people aiming their tweeters at the window to reflect the sound.
All in all, I think it really depends on what sounds best to the driver. Some people might not like where you put your speakers, but they don't own the car and they don't drive it...
#206
Well Mr. H, feel free to add more insight... I personally wasn't willing to drop 2000 words contrasting the complete evolution of both...
So I simply told him that Multi was likely going to serve him better...
Please feel free to technically explain why and how to him...
Aside from the technical differences, which I doubt you understand, double-blind ABX testing has shown time and again that there is no audible difference between a single-bit and a dual Burr-Brown 24-bit processor, or anything in between.
/fail
Last edited by Haunz; 11-21-2010 at 10:42 PM.
#208
In electronics, a digital-to-analog converter (DAC or D-to-A) is a device that converts a digital (usually binary) code to an analog signal (current, voltage, or electric charge). An analog-to-digital converter (ADC) performs the reverse operation.
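As a toy illustration of that definition, here is an ideal N-bit DAC transfer function (the function name and the Vref * code / 2^N scaling convention are my own choices for the sketch, not from the quoted definition; real converters differ in reference scheme and linearity):

```python
def dac_output(code, n_bits=8, v_ref=5.0):
    """Ideal unsigned N-bit DAC: map a binary code to a voltage.
    One common convention: V_out = V_ref * code / 2**n_bits,
    so full scale is V_ref minus one LSB."""
    if not 0 <= code < 2 ** n_bits:
        raise ValueError("code out of range for this bit depth")
    return v_ref * code / (2 ** n_bits)

print(dac_output(128))  # mid-scale: 2.5 V
print(dac_output(255))  # full scale minus one LSB: ~4.98 V
```

An ADC does the reverse mapping, quantizing a voltage back to the nearest code, which is where quantization error comes from.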