
IdiddidI — macrumors member, Original poster — Jan 22, 2008
I've seen a few posts talking about how TosLink could theoretically pass through Dolby-encoded streams to a receiver capable of decoding them. I ran out of optical-in ports on my receiver, so I'm just using ATV's analog audio out. Am I getting just 2-channel audio when in fact I could be getting an enhanced signal? Will Take 2 require an optical connection for 5.1?
 
I've seen a few posts talking about how TosLink could theoretically pass through Dolby-encoded streams to a receiver capable of decoding them. I ran out of optical-in ports on my receiver, so I'm just using ATV's analog audio out. Am I getting just 2-channel audio when in fact I could be getting an enhanced signal? Will Take 2 require an optical connection for 5.1?

Yes, 5.1 is not supported through analog stereo connectors.

Kevin
 
Yes, 5.1 is not supported through analog stereo connectors.

Kevin

I should qualify your response above:

Dolby Surround analog can support up to 5.1 channels through analog stereo connections.

Dolby Digital 5.1 channel Surround requires either an optical TOSLINK or coaxial S/PDIF connection.

Most, though not all, Dolby Digital soundtracks carry phase-shifted surround in the front L-R channels as a backward compatibility measure for those who do not have a Dolby Digital receiver, or who have a Dolby Digital receiver but lack the means to connect fiber to it to transmit the AC-3 bitstream to the receiver's AC-3 decoder.

Translation: If you ran out of ports and your priority is to use another device in the optical connector, then switch your receiver's mode for the AppleTV input to Dolby ProLogic II. This will decode the phase-shifted surround channels that are multiplexed into the left and right channels, assuming they are present.

Again, I say most, but not all, because it depends on whether or not the mastering engineers included it. It is not a requirement of Dolby Labs Licensing guidelines that they include analog quadrature in order to obtain permission to use Dolby's logos and trademarks on their packaging.
 
Most, though not all, Dolby Digital soundtracks carry phase-shifted surround in the front L-R channels as a backward compatibility measure for those who do not have a Dolby Digital receiver, or who have a Dolby Digital receiver but lack the means to connect fiber to it to transmit the AC-3 bitstream to the receiver's AC-3 decoder.

Really? Would this be true on DVDs that include a 2.0 stereo mix as well? I'd be surprised... wouldn't this effectively increase the load on the encoder and waste bitrate in the surround stream by creating unnecessarily complicated L and R channels? I always assumed that DVDs which had a 5.1 mix only were being downmixed to a Pro Logic-compatible stream by internal circuitry in the DVD player when the player was hooked up via a traditional stereo connection.
 
But for all intents and purposes (i.e., the ATV), Kevin is correct.

Not quite. He is correct insofar as Dolby Digital 5.1-channel surround is concerned. It can only be transmitted either through TOSLink, S/PDIF, or six-channel RCA (requires a Dolby Digital decoder in the DVD player itself).

I think I should clarify, also, that Dolby Digital is a multichannel format that is not restricted to 5.1 channels. It can be 1.0, 2.0, 2.1, 3.1, 4.0, 5.1 or 7.1 (EX subformat). Technically Dolby Digital can be decoded and sent to a receiver over stereo RCA... supporting either a Dolby Digital 2.0 mix, or a stereo downmix of Dolby Digital 5.1 channel surround. Of course at that point it's no longer Dolby Digital.

To clarify my correction of Kevin's statement, it's the Dolby Digital bitstream format, not the number of channels, that can't be transmitted over stereo RCA.

But the AppleTV will pass through Dolby Surround analog that can be decoded into 5.1 channels of information, as it is. There is no additional software or hardware needed because the encoding is part of the analog signal that has been digitized. Upon reconstruction of the analog signal (whether it takes place in the ATV and is sent stereo to the receiver, or is sent digital to the receiver and decoded there) if Dolby Surround was present in the DVD master from which the ATV movie was duplicated, it will be present in the analog L-R signal decoded by the AppleTV or the receiver.

Try it. Download Isao Tomita's "Planets" or "Snowflakes are Dancing" from iTunes Music Store and send them to your receiver from your AppleTV's analog or optical output... and set your receiver to Dolby ProLogic II mode (DPLII Music if you have that subset). This is not a "phony" phase shift produced by the Dolby ProLogic decoder. Tomita's recordings were mastered in Dolby Surround and you will very distinctively hear separation as good as the original analog Dolby Surround master.

Once a Dolby Surround analog quadrature is in a stereo analog, PCM, AC-3 or other file, it's there to stay unless an engineer goes to the trouble of either remastering the original multitrack recording to stereo, or playing back the Dolby Surround 2-track master with a decoder, then dumping the quadrature and resampling the separated L-R. That is so cumbersome it's rarely done for something like the iTunes Music Store.

Really? Would this be true on DVDs that include a 2.0 stereo mix as well?

Depends...

Apple's documentation isn't that good on this stuff, so you will have to refer to IMDB or a DVD review site to find out the specifics of the DVD release from which the H.264 was encoded. If all it says there is Dolby Digital 2.0, then it's likely only stereo.

However, the designation "Dolby Digital 2.0 Surround" is used to indicate a recording that is 2 channel stereo with Dolby Surround quadrature in the L-R, but encoded as 2-channel Dolby Digital AC-3 instead of 5.1-channel (referred to as 3/2.1 in the technical documentation). That is the designation as specified by Dolby Laboratories, and with rare exception this is what you should find if Dolby Surround analog is present in the stereo AC-3 bitstream.

I'd be surprised... wouldn't this effectively increase the load on the encoder and waste bitrate in the surround stream by creating unnecessarily complicated L and R channels?

No. You have to think of it this way... If a constant sampling frequency and bit depth is used, such as in 16-bit Linear PCM for Red Book CD Digital Audio, then it doesn't matter how much sound there is... It's still going to be the same size bitstream as the same format carrying a 1kHz sinewave.

What I think you're misconstruing here is the difference between the carrier frequency and the frequency and amplitude of the audio it can carry. Let's use Linear PCM as a really basic example:

The digital carrier has a frequency of 44.1 kHz, and a quantization interval size of 16 bits per channel, and 2 channels. This equates to 1411 Kbps. This bitstream can reproduce any audio reliably up to 22050 Hz (the Nyquist limit), with a dynamic range of around 96dB, meaning that it will support all frequencies within the human range of hearing ("A-weighted" range) and a range of amplitude values spanning 96 decibels from the softest to the loudest sounds. This bitstream does not get larger if you cram more sound into it.
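
Here's that arithmetic spelled out in a few lines of Python, just for concreteness (nothing here beyond the numbers above):

```python
import math

fs = 44_100       # sampling frequency, Hz
bits = 16         # bits per sample, per channel
channels = 2

bitrate = fs * bits * channels              # 1,411,200 bps = 1411 Kbps
nyquist = fs / 2                            # 22,050 Hz reproducible ceiling
dyn_range_db = 20 * math.log10(2 ** bits)   # ~96.3 dB

# Note: none of these numbers depend on WHAT sound is in the stream.
print(bitrate, nyquist, round(dyn_range_db, 1))
```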

What happens in Dolby Surround/ProLogic/ProLogic II is we're not adding more digital data. We're adding more analog frequencies... but these frequencies are shifted 90 degrees out of phase with the main audio, so as to be imperceptible in regular stereo playback.

Here is an example of a 90 degree phase shift:

[Image: two sinewaves 90 degrees out of phase]


In a quadrature they form a more complex soundwave, but white noise can be said to be complex too, and that doesn't change the dynamics of the LPCM format. It's still 1411 Kbps to reconstruct that white noise. It's still 1411 Kbps to reconstruct a complex analog waveform that is later demultiplexed by a ProLogic decoder that looks at the complex waveform and mathematically separates the portion of the signal that's 90 degrees out of phase.
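
To make that concrete, here is a toy sketch in Python of the matrix idea: fold a surround channel into L-R at -3dB with a 90-degree phase shift. This is emphatically NOT the licensed Dolby encoder (their gains and filtering are more involved); it just demonstrates the principle I'm describing:

```python
import numpy as np
from scipy.signal import hilbert

def matrix_encode(L, R, C, S):
    """Toy Dolby-Surround-style matrix encode (NOT the licensed
    algorithm): center into both channels at -3dB, surround into
    both at -3dB with opposite 90-degree phase shifts, so a
    ProLogic-style decoder can separate it back out later."""
    g = 10 ** (-3 / 20)            # -3dB ~= 0.708
    S90 = np.imag(hilbert(S))      # surround, phase-shifted 90 degrees
    Lt = L + g * C - g * S90       # left-total
    Rt = R + g * C + g * S90       # right-total
    return Lt, Rt
```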

The same principle is used in fiber optic transmission on a massive scale in technologies like Dense Wavelength Division Multiplexing, where multiple frequencies (colors) of light are combined as separate carrier frequencies into one complex carrier frequency that consists of 80+ channels (wavelengths) of data at the same time over a single fiber. This led to breakthroughs by Bell Labs and others, breaking 400 gigabits per second over 400 kilometers along a single strand of fiber! In 1997, Bell Labs set a record transmitting 3.2 Terabits per second over a 7-strand fiber optic bundle... equivalent then to the per-second traffic of the entire internet.


I always assumed that DVDs which had a 5.1 mix only were being downmixed to a Pro Logic-compatible stream by internal circuitry in the DVD player when the player was hooked up via a traditional stereo connection.

It is my understanding from Dolby's technical papers that this is not the case.

The downmix feature that is required of all licensed Dolby Digital decoders downmixes the center and rear channels at -3dB into a stereo signal... sans surround quadrature.

This can occur at the DVD player if the DVD player is equipped with a Dolby Digital decoder internally (most today are), or at the receiver if running optical fiber from the DVD to the Dolby Digital decoder in the receiver. But a quadrature is not generated on the fly.

That would require some impressive hardware in a DVD player, AND it would have to be a licensed Dolby Surround encoder (which is expensive as hell relative to even premium DVD player prices today).

If you have a Dolby Digital processor in your receiver, then there's no need to downmix to 2 channels unless you only have two speakers in which case a Dolby Surround quadrature will do you no good.

What really happens is that the Dolby Surround analog signal, if there is one, is present in the L-R channels of the AC-3 bitstream. Either the audio goes to a stereo receiver as a stereo downmix with Center, SurL and SurR downmixed -3dB into the L-R. Or the audio goes to a ProLogic decoder which separates the phase shifted portion of the signal and sends it to the rear, or, in the case of ProLogic II, sends it to the rear, center and LFE using a combination of phase shifting and bandpass filtering.
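
In code terms, that required downmix is nothing but gain-and-sum... no phase trickery, no new quadrature. A minimal sketch (using the -3dB figure above; a real decoder reads its mix levels from the bitstream metadata):

```python
def stereo_downmix(L, R, C, Ls, Rs):
    """Lo/Ro-style stereo downmix: center and surrounds folded
    into left/right at -3dB. No phase shift is applied, so no
    Dolby Surround quadrature is generated on the fly."""
    g = 10 ** (-3 / 20)    # -3dB ~= 0.708
    Lo = L + g * C + g * Ls
    Ro = R + g * C + g * Rs
    return Lo, Ro
```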
 
Not quite. He is correct insofar as Dolby Digital 5.1-channel surround is concerned. It can only be transmitted either through TOSLink, S/PDIF, or six-channel RCA (requires a Dolby Digital decoder in the DVD player itself).

Or through the HDMI cable.

But the AppleTV will pass through Dolby Surround analog that can be decoded into 5.1 channels of information, as it is. There is no additional software or hardware needed because the encoding is part of the analog signal that has been digitized.

This is where "all intents and purposes" comes in. I know of no Mac software package that can take a DD or DTS 5.1 discrete channel from a DVD (or any other source, for that matter) and put it into Dolby Surround for 2-channel matrix of a 5.1 signal. The best that Handbrake can do is 5-channel DPLII. Until Apple fixes Quicktime (which is what the ATV uses for playback) we will not get anything better than that. Hopefully, the ATV2 software will fix this problem for us (i.e., not restricted solely to HD to ATV2 downloads from Apple).
 
No. You have to think of it this way... If a constant sampling frequency and bit depth is used, such as in 16-bit Linear PCM for Red Book CD Digital Audio, then it doesn't matter how much sound there is... It's still going to be the same size bitstream as the same format carrying a 1kHz sinewave.

What I think you're misconstruing here is the idea of an analog carrier versus a digital carrier. Let's use Linear PCM as a really basic example:

The digital carrier has a frequency of 44.1 kHz, and a quantization interval size of 16 bits per channel, and 2 channels. This equates to 1411 Kbps.

What happens in Dolby Surround is we're not adding more digital data. We're adding more analog frequencies... but these frequencies are shifted 90 degrees out of phase with the main audio, so as to be imperceptible in regular stereo playback. In a quadrature they form a more complex soundwave, but white noise can be said to be complex too, and that doesn't change the dynamics of the LPCM format. It's still 1411 Kbps to reconstruct that white noise. It's still 1411 Kbps to reconstruct a complex analog waveform that is later demultiplexed by a ProLogic decoder that looks at the complex waveform and mathematically separates the portion of the signal that's 90 degrees out of phase.

Which would be a salient point if we were talking about a 1411kbps LPCM stream on every channel (well, 1536kbps actually, since movie audio is stored at the more video-friendly 48 kHz). But AC3 is a lossy compression codec, like MP3 (though obviously more advanced). So we have a total of 448kbps (sometimes 640kbps) spread out over 6 discrete channels. Wouldn't the added complexity of additional sound cause problems for the lossy AC3 encoder?

EDIT: Just realized I might be coming across as a bit petulant—not the intent, I assure you. I'm just surprised to find out this is the case, and going off what (admittedly little) I know about lossy digital audio, it seems weird. Then again, I've never "noticed" it before (and I've been feeding Dolby Digital to an external decoder for ages now), so I guess this is more of an academic discussion than a "holy crap, my real world audio sounds less ideal than it could" discussion. ;)
 
Or through the HDMI cable.

Right.

This is where "all intents and purposes" comes in. I know of no Mac software package that can take a DD or DTS 5.1 discrete channel from a DVD (or any other source, for that matter) and put it into Dolby Surround for 2-channel matrix of a 5.1 signal.

Not needed. As I stated before, the Dolby Surround phase shifted audio is ALREADY present in the left and right channels of most Dolby Digital AC-3 soundtracks and most if not all stereo soundtracks. There's absolutely nothing required of the AppleTV other than feeding this audio, as-is, to the receiver... which will, if equipped with a Dolby Digital or Dolby ProLogic decoder, demux the analog quadrature.
 
holy... thank you all for taking the time to respond to my question. I've bookmarked this post--very informative :)
 
Not needed. As I stated before, the Dolby Surround phase shifted audio is ALREADY present in the left and right channels of most Dolby Digital AC-3 soundtracks and most if not all stereo soundtracks. There's absolutely nothing required of the AppleTV other than feeding this audio, as-is, to the receiver... which will, if equipped with a Dolby Digital or Dolby ProLogic decoder, demux the analog quadrature.

Are you suggesting, then, that if we use Handbrake to convert a DD 5.1 DVD to an MP4 container that we should use the Dolby Surround option (instead of DPLII) to get 5.1 playback on a Dolby Digital or DPLII receiver?
 
Which would be a salient point if we were talking about a 1411kbps LPCM stream on every channel (well, 1536kbps actually, since movie audio is stored at the more video-friendly 48 kHz). But AC3 is a lossy compression codec, like MP3 (though obviously more advanced). So we have a total of 448kbps (sometimes 640kbps) spread out over 6 discrete channels. Wouldn't the added complexity of additional sound cause problems for the lossy AC3 encoder?

No. No more than adding a fourth instrument to a trio would cause problems for a sound recording in any digital format. Does this analogy help clear up what's happening here?

Except add one more thing... the audio is phase shifted. It's not more data, it is an alteration to an existing soundwave's appearance... one which, if played back through a stereo, does not produce any audible difference (because of the phase shift)... but which makes it easy to mathematically separate, via a Fourier transform, the complex soundwave into multiple soundwaves.

The other thing you have to understand about even "lossy" codecs such as AAC or its uncle AC-3 is that an analog soundwave has to be reconstructed from digital data. Most of the artifaction that occurs is imperceptible. All of the artifaction occurs during reconstruction, not encoding. It is poor reconstruction of the signal that results in noticeable artifaction... but note: when we talk about embedding an analog quadrature in a stereo PCM, stereo AAC, stereo AC-3, stereo ADPCM, etc., we are NOT talking about creating a resulting soundwave that exceeds the dynamic range or frequency response of any of these formats.

Add a trumpet to a drumbeat and you get one complicated soundwave. Now imagine you throw a second trumpet 90 degrees out of phase with the first. You won't notice the second trumpet. If you shift the second trumpet 180 degrees out of phase with the first, it'll completely cancel out both waves. None of these activities actually require more bandwidth than is there, nor do they produce a soundwave of such finite detail that it cannot be reconstructed from any of the aforementioned formats.
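
You can check the trumpet arithmetic numerically. A quick numpy sketch, with a plain sine wave standing in for the trumpet (my simplification, not a real instrument tone):

```python
import numpy as np

fs = 48_000                    # sample rate, Hz
t = np.arange(fs) / fs         # one second
f = 440.0                      # the "trumpet": a 440 Hz sine

a = np.sin(2 * np.pi * f * t)                   # first trumpet
b90 = np.sin(2 * np.pi * f * t + np.pi / 2)     # second, 90 degrees out
b180 = np.sin(2 * np.pi * f * t + np.pi)        # second, 180 degrees out

rms = lambda x: np.sqrt(np.mean(x ** 2))
print(rms(a + b90))    # ~1.0  -> the pair combines, nothing cancels
print(rms(a + b180))   # ~0.0  -> complete cancellation
```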

What you're saying is as if videoing two apples and an orange instead of one banana would change a MiniDV signal from its 3.5Mbps compressed bitstream. It doesn't. It makes no difference to the digital format what the complicated sound (or image) is composed of. Every pixel of a frame, or every quantization interval of sampled audio, compressed or not, has a certain frequency response and amplitude range... and none of what I've described here confounds that.

Does that make sense?
 
Except add one more thing... the audio is phase shifted. It's not more data, it is an alteration to an existing soundwave's appearance... one which, if played back through a stereo, does not produce any audible difference (because of the phase shift)... but which makes it easy to mathematically separate, via a Fourier transform, the complex soundwave into multiple soundwaves.

Avatar, I greatly value your expertise in this area! I am curious, though. Lossy compression algorithms use psychoacoustics to determine what the human ear can hear and what it can't. Since this phase shift for DPLII isn't audible, why does it not get taken out during compression of the L and R AC-3 stream?

-steve
 
No. No more than adding a fourth instrument to a trio would cause problems for a sound recording in any digital format. Does this analogy help clear up what's happening here?

Not really. A complex rock or dance record full of multi-layered, reverbed parts and many complex embellishments and flourishes (distortion, lots of cymbals, anything that creates a lot of different types of sound) is degraded more audibly by lossy compression codecs like MP3 than, say, a guy and a guitar. More complex soundwaves by default must have more information stripped out when being compressed in order to meet a given bitrate target.

Or, to use a closer example, look at the bitrate outputs using something like LAME's "V2" VBR mode (which targets a given "quality of sound," not a specific bitrate) when you encode something like a BT dance track (very complex electronic music) vs, say, a solo piano track from a Final Fantasy Piano Collections album. Because the BT track is much more complicated sonically, the encoder will have to throw out less information (and thus produce a higher bitrate file) to meet the same level of relative sonic quality when compared to the original LPCM from the CD vs. the solo piano track (which will still sound very close to the CD, if not transparent, at a much lower bitrate than the BT track).

Is this understanding of lossy encoding incorrect?
 
Are you suggesting, then, that if we use Handbrake to convert a DD 5.1 DVD to an MP4 container that we should use the Dolby Surround option (instead of DPLII) to get 5.1 playback on a Dolby Digital or DPLII receiver?

No. Suffice it to say this is a good question but there are various possible answers until we know exactly how the AppleTV 2.0 will handle AC-3.

What I'm saying is that the AC-3 track itself usually carries Dolby Surround in the front left and right channels. If the AppleTV allows audio output from the stereo RCA when playing back HD with AC-3 passthrough (or however it's going to work), then the AppleTV can pass on to your Dolby ProLogic receiver the only two channels necessary to play back the full Dolby Surround analog mix that is present in most Dolby Digital AC-3 bitstreams.

So it's POSSIBLE (but we won't know for sure until we see what method of AC-3 passthrough AppleTV employs) that you can just use the AC-3 track in Handbrake.

However, if the AppleTV does NOT have a Dolby Digital decoder itself, it is possible that no sound will output from the stereo RCA when playing back AppleTV HD with 5.1 passthrough... the AC-3 would only traverse the optical output in such a scenario. In that case, you will have to encode your movies in Dolby ProLogic or ProLogic II output mode. I use Dolby Surround analog to refer to the original theatrical surround format, but Dolby ProLogic and ProLogic II are also analog phase-shifted formats. All three will work, but Dolby Surround only contains Left, Right, and Mono Surround.

According to this documentation on Handbrake, there are two ways an analog surround track is produced... One is by downmix. Note that Handbrake is not a licensed Dolby Surround or Dolby ProLogic encoder, and therefore it is not the best choice in terms of fidelity because it will not adhere to Dolby Laboratories Licensing criteria.

One thing I found, which I had suspected, is that Handbrake will detect if there is a Dolby Surround soundtrack present, and if so, it will preserve it. This might be a good option if you have a Dolby ProLogic receiver, aren't getting Dolby Digital, and aren't sure you want to wait until you know exactly how Dolby Digital pass-through will work on AppleTV.

According to the Handbrake docs, Dolby ProLogic encoding is the default behavior of the Handbrake software. So there you'd need to do nothing.

Now, if it turns out that AC-3 passthrough works on AppleTV, there's the problem of whether or not AppleTV actually has a Dolby Digital decoder onboard to perform a stereo downmix. If it doesn't, you're best off not using the AC-3 pass through encoding mode if you do not have an optical input on your receiver.

However, if we find in Take 2 that there is output from the stereo RCA on the AppleTV when playing back AC-3 passthrough, then you may be able to encode AC-3 passthrough in Handbrake for those movies that already have Dolby Surround embedded in the front L-R channels of the 5.1 mix. This way the file possesses the stereo matrix phase shifted surround in the front L-R of the Dolby Digital track and can pass those two channels on to your receiver AND be forward compatible if and when you choose to get a Dolby Digital receiver.

Does that help clear things up a bit?
 
Not really. A complex rock or dance record full of multi-layered, reverbed parts and many complex embellishments and flourishes (distortion, lots of cymbals, anything that creates a lot of different types of sound) is degraded more audibly by lossy compression codecs like MP3 than, say, a guy and a guitar. More complex soundwaves by default must have more information stripped out when being compressed in order to meet a given bitrate target.

Or, to use a closer example, look at the bitrate outputs using something like LAME's "V2" VBR mode (which targets a given "quality of sound," not a specific bitrate) when you encode something like a BT dance track (very complex electronic music) vs, say, a solo piano track from a Final Fantasy Piano Collections album. Because the BT track is much more complicated sonically, the encoder will have to throw out less information (and thus produce a higher bitrate file) to meet the same level of relative sonic quality when compared to the original LPCM from the CD vs. the solo piano track (which will still sound very close to the CD, if not transparent, at a much lower bitrate than the BT track).

Is this understanding of lossy encoding incorrect?

Yes. Newer encoding schemas greatly mitigate the amount of data required to effectively reconstruct the original analog signature, with fundamentally indiscernible artifaction at bitrates one tenth that of PCM. No scientific double-blind tests have shown otherwise. The ABX tests at Hydrogenaudio hardly qualify.

AAC and AC-3 are perceptual coding schemas that use pure Modified Discrete Cosine Transform. MP3 instead applies MDCT to a 32-band Polyphase Quadrature Filter which, incidentally, induces frequency aliasing.

Supposedly MP3 uses sub bands to mask this but it's still noticeably inferior to AAC and its uncle AC-3, at the same fixed bitrates.
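
For the curious, the MDCT at the heart of AAC and AC-3 is only a few lines by itself. A bare-bones Python version... no windowing, no 50% block overlap, no psychoacoustic model, which is where the real codecs do all their perceptual work:

```python
import numpy as np

def mdct(x):
    """Plain MDCT: 2N time samples in, N frequency coefficients out.
    The 2:1 lapped structure is what lets codecs overlap blocks
    without coding redundant samples."""
    N = len(x) // 2
    n = np.arange(2 * N)
    k = np.arange(N).reshape(-1, 1)
    basis = np.cos(np.pi / N * (n + 0.5 + N / 2) * (k + 0.5))
    return basis @ x
```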

There are additional perceptual coding techniques that AAC and AC-3 use that MP3 does not. These include truncation of redundant data and elimination of data not necessary to the signature.

The first, truncation, has many examples.

My favorite example is the delta method used in ADPCM, a format to which DTS is very similar (the delta step itself is lossless).

Adaptive Differential Pulse Code Modulation works by throttling the quantization interval sample sizes to match the amplitude values, rather than using the same quantization interval sample size no matter what the amplitude value.

Also, relative amplitude values instead of absolute are used.

For example...

If in one interval the amplitude is -27dBFS then jumps to -26dBFS in the next:

The absolute values will require 5 bits per sample, 10 bits total between them.

The relative CHANGE, a difference of 1 dBFS, only requires 1 bit.

We just cut that sample's size down by 80% without any loss.
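
Here's that idea as a toy Python sketch: plain delta coding only. Real ADPCM also adapts its quantizer step size on the fly, which I'm leaving out:

```python
def delta_encode(samples):
    """Store the first value absolutely, then only the change
    from each sample to the next. Small changes need few bits."""
    out = [samples[0]]
    for prev, cur in zip(samples, samples[1:]):
        out.append(cur - prev)
    return out

def delta_decode(deltas):
    out = [deltas[0]]
    for d in deltas[1:]:
        out.append(out[-1] + d)
    return out

# e.g. [-27, -26, -26, -25] encodes to [-27, 1, 0, 1]:
# lossless round trip, and mostly tiny values that pack into few bits.
print(delta_decode(delta_encode([-27, -26, -26, -25])))
```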


The second method, elimination of data unnecessary to faithful reconstruction, includes things like bandpass filters.

There's an RF intermodulation filter, a DC notch filter, and a 20kHz anti-alias lowpass filter. Of note, the antialias filter does a couple things...

First, it eliminates frequencies above the Nyquist limit which would otherwise induce aliased frequencies upon reconstruction of the analog signal. Second, it eliminates a lot of unnecessary data. Neither of these functions of the anti-alias filter degrade anything that you or any other human being can possibly perceive... I don't care what the morons at Stereophile tell you.
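
As a digital sketch of what an anti-alias lowpass does (a Butterworth stand-in on my part; real converters use far steeper analog and oversampled digital designs, so treat this as illustrative only):

```python
from scipy.signal import butter, lfilter

def antialias_lowpass(x, fs=48_000, cutoff=20_000, order=6):
    """Lowpass everything above ~20 kHz: kills content that would
    fold back (alias) below the Nyquist limit on reconstruction,
    and discards data no human ear can use anyway."""
    b, a = butter(order, cutoff / (fs / 2))   # cutoff normalized to Nyquist
    return lfilter(b, a, x)
```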

Lastly, 16-bit Linear PCM itself is no hot puppy. It still supports "only" 65,536 possible amplitude values at any quantization interval. Contrast that with 24-bit LPCM, which supports 16.7 MILLION possible amplitude values per sample. This translates to a difference of about 96dB dynamic range for 16-bit versus about 144dB dynamic range for 24-bit!

The difference is staggering, even if we assume a 48kHz sampling frequency for both (which is again sufficient, since it is more than twice the 20kHz limit of human hearing, per Nyquist). 24-bit LPCM has a resolution so fine it doesn't require dithering, which 16-bit LPCM does.
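
The numbers fall straight out of the ~6dB-per-bit rule:

```python
import math

def dynamic_range_db(bits):
    """20*log10(2^bits): roughly 6.02 dB per bit of resolution."""
    return 20 * math.log10(2 ** bits)

print(2 ** 16, round(dynamic_range_db(16), 1))   # 65,536 values, ~96.3 dB
print(2 ** 24, round(dynamic_range_db(24), 1))   # 16,777,216 values, ~144.5 dB
```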

The difference between AAC/AC-3 versus 16-bit LPCM in terms of frequency response and dynamic range is tremendously narrower than the gaping difference between 16-bit Linear PCM and 24-bit Linear PCM.

Even so, the AES deems 128 Kbps AAC-LC acoustically transparent to 16-bit LPCM, not to mention that I've never met anyone other than an audio engineer who could even distinguish 16-bit LPCM from 24-bit LPCM. The real benefit can't even be realized by most speakers... and by most speakers I include the $65,000 Wilson X1, which has a dynamic range of 102dB... just a decibel shy of the dynamic range of AC-3!
 
So if I have any movies that I really want to hear in 5.1, should I just hold off on encoding them until Take 2 arrives?
 
So if I have any movies that I really want to hear in 5.1, should I just hold off on encoding them until Take 2 arrives?

Yes. Until we know how the ATV2 handles Dolby Digital you'd be ill-advised to start ripping. If iTunes can sync DD files to the ATV, look for a revision to Handbrake to include such provision (AC-3 passthrough). But, if Apple is obstinate, they just might make it difficult to do. It is conceivable (given Apple's recent history) that only downloaded HD/DD movies will be allowed to play DD 5.1 on the ATV2.
 
Yes. Until we know how the ATV2 handles Dolby Digital you'd be ill-advised to start ripping. If iTunes can sync DD files to the ATV, look for a revision to Handbrake to include such provision (AC-3 passthrough). But, if Apple is obstinate, they just might make it difficult to do. It is conceivable (given Apple's recent history) that only downloaded HD/DD movies will be allowed to play DD 5.1 on the ATV2.

Thanks for the advice. I'll just start ripping some of my comedy DVDs then where I'm not too bothered about DD.
 