I have implemented a modified HRTF (head-related transfer function) based surround panorama simulation for playback over stereo headphones.
Our decoder can be downloaded from:
Some basic facts: What is HRTF filtering?
The basic question is: why do we perceive sound direction with only a pair of ears? Two ears are obviously sufficient for stereo, but why does surround perception work as well?
The answer is that our ears in fact receive only a stereo signal, and the surround perception is then generated by our brain. Our head, shoulders and ears cause acoustic wave reflections, and from these the brain is able to 'guess' the source direction.
The idea is to emulate such reflections in the digital domain (with some DSP operations) before playback over stereo headphones, so that the listener's brain can 'decode' this stereo source into surround.
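To illustrate the phase-based directional cues the post builds on, here is a minimal sketch of one such cue, the interaural time delay (ITD): the far ear hears a lateral source slightly later than the near ear. All names and constants below are illustrative assumptions, not the author's actual code.

```python
# Sketch: phase-only directional cue via an interaural time delay (ITD).
# Constants are common textbook values, assumed for illustration.
import math

SAMPLE_RATE = 44100          # Hz
HEAD_RADIUS = 0.0875         # m, average head radius
SPEED_OF_SOUND = 343.0       # m/s

def itd_seconds(azimuth_deg):
    """Woodworth's approximation of the interaural time delay for a
    source at the given azimuth (0 = front, 90 = fully to the right)."""
    a = math.radians(azimuth_deg)
    return (HEAD_RADIUS / SPEED_OF_SOUND) * (a + math.sin(a))

def pan_with_itd(mono, azimuth_deg):
    """Delay the far ear by the ITD (rounded to whole samples).
    This changes only the relative timing (phase), not the spectrum."""
    delay = round(itd_seconds(abs(azimuth_deg)) * SAMPLE_RATE)
    delayed = [0.0] * delay + list(mono)   # far ear: signal arrives later
    padded = list(mono) + [0.0] * delay    # near ear: signal on time
    if azimuth_deg >= 0:                   # source on the right: left ear is far
        return list(zip(delayed, padded))  # (left, right) sample pairs
    return list(zip(padded, delayed))

left_right = pan_with_itd([1.0, 0.5, 0.25], 90)
```

At 90 degrees the delay works out to roughly 0.66 ms, i.e. about 29 samples at 44.1 kHz, which is well within the range the brain uses for lateralization.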
And indeed it works great, but only when the individual listener's shoulders, head and ear shapes are simulated very precisely. Generalized solutions tuned for an average listener are usually pretty disappointing: the sound tends to be perceived as a linearly distorted stereo signal rather than as surround (we differ greatly from each other in body shape).
The algorithm that I've implemented affects only the phase of the signal, without linear distortions. This, however, significantly limits vertical direction perception and also limits perception of the surround speakers' direction. To overcome this limitation, I have added a reverb algorithm that emulates sound propagation in a 20-square-meter room (4 m x 5 m). This is not a normal reverb effect: for the left surround channel, the impulse was emitted from the surround-left speaker position and the response was recorded at the sweet spot with a microphone directed towards that speaker. A similar procedure was applied for the surround right channel (impulse generated at the right speaker position, room response captured with a microphone directed towards the right surround speaker).
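Conceptually, applying such a measured response to a channel is just convolution: each input sample excites a scaled, delayed copy of the room's impulse response. A minimal sketch, with a made-up impulse response standing in for the measured one:

```python
# Illustrative sketch: convolve a surround channel with a room impulse
# response captured at the sweet spot. The IR values below are invented
# for illustration; the real ones come from the measurement described above.
def convolve(signal, impulse_response):
    """Direct-form FIR convolution: each output sample is a sum of past
    input samples weighted by the impulse response taps."""
    out = [0.0] * (len(signal) + len(impulse_response) - 1)
    for n, x in enumerate(signal):
        for k, h in enumerate(impulse_response):
            out[n + k] += x * h
    return out

room_ir = [1.0, 0.0, 0.0, 0.3, 0.0, 0.1]   # direct sound + two reflections (assumed)
surround_left = [1.0, -1.0]
wet = convolve(surround_left, room_ir)
```

A real room response at 44.1 kHz runs to tens of thousands of taps, which is exactly why the direct convolution above is too expensive in practice.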
From the gathered impulse responses, simplified IIR filter coefficients have been calculated, in order to reduce the computational complexity compared with long FIR convolution using the full room impulse responses.
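The efficiency gain of the IIR approach can be seen with a toy example: a one-pole feedback comb filter reproduces an exponentially decaying echo tail with one multiply per sample, where the equivalent FIR would need one multiply per tail sample. The coefficients below are illustrative, not the ones fitted from the measured responses.

```python
# Sketch: a feedback comb filter as the simplest IIR echo model.
# y[n] = x[n] + feedback * y[n - delay]  ->  infinite decaying echoes
# from just two parameters, instead of a long FIR tap list.
def comb_iir(signal, delay_samples, feedback):
    out = []
    for n, x in enumerate(signal):
        y = x
        if n >= delay_samples:
            y += feedback * out[n - delay_samples]  # recycle past output
        out.append(y)
    return out

# An impulse comes back every 2 samples, halved each time.
echoed = comb_iir([1.0, 0.0, 0.0, 0.0, 0.0], delay_samples=2, feedback=0.5)
```

Fitting such recursive structures to a measured response trades some accuracy in the tail for a large drop in per-sample cost, which matches the motivation stated above.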
The front channels (FL, C and FR) have been simulated using only the phase adjustments derived from the HRTF concept.
There is still a lot of room for improvement, but I think the result is very promising.
Please test it and provide some critical comments!
If you have any questions, I’d be happy to answer them.
The algorithm (embedded in our DSfilter) can be applied to any 5.1 source, so it can be used for an AC3 stream, for example.
Here is a test clip (in Aud-X MP3 5.1 format).
:thanks: , 3d