Now that Apple has begun to release tracks in DRM-free 256kbps AAC through the iTunes Store, the listening tests are on. MaximumPC gathered 10 people and had each select 10 familiar tracks, which were then encoded both at 128kbps AAC (the current iTunes Store offering) and at 256kbps, the new DRM-free bitrate. They then asked their ten subjects, in a double-blind experiment, whether they could tell the difference between the two versions after repeated listens.
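For what it's worth, the bookkeeping behind a blind preference trial like this is easy to sketch. The code below is my own illustration, not MaximumPC's actual procedure: the `listen` callback stands in for a human listener, and the track labels stand in for real audio clips the subject never sees.

```python
import random

def run_preference_trial(listen, trials=10, rng=None):
    """Sketch of one subject's blind preference test.

    Each trial presents the 128k and 256k encodings in random order;
    `listen(pair)` stands in for the human listener and must return
    the index (0 or 1) of the sample they preferred.  In a real test
    the pair would be audio clips, so the listener never sees labels.
    Returns how often the subject preferred the 256k encoding.
    """
    rng = rng or random.Random()
    higher_picked = 0
    for _ in range(trials):
        order = ["128k", "256k"]
        rng.shuffle(order)            # neither presenter nor subject knows which is which
        pick = listen(order)          # listener's choice, by position only
        if order[pick] == "256k":
            higher_picked += 1
    return higher_picked
```

A hypothetical golden-eared listener who always prefers the higher bitrate would score `trials` out of `trials`; a guesser would hover around half.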
But they also threw a twist into the mix, asking subjects to listen first with a pair of the default Apple earbuds, then with a pair of $400 Shure SE420 phones. Their theory – that more people would be able to tell the difference between the bitrates with the higher-quality earphones – didn’t quite pan out.
The biggest surprise of the test actually disproved our hypothesis: Eight of the 10 participants expressed a preference for the higher-bitrate songs while listening with the Apple buds, compared to only six who picked the higher-quality track while listening to the Shures. Several of the test subjects went so far as to say they felt more confident expressing a preference while listening to the Apple buds. We theorize that the Apple buds were less capable of reproducing high frequencies and that this weakness amplified the listeners’ perception of aliasing in the compressed audio signal. But that’s just a theory.
Also interesting: the older subjects (whose hearing is supposedly less acute) did a better job of consistently telling the tracks apart than the younger participants did. Could it be that the younger generation has grown up on compressed music and doesn’t know what to listen for? Or it could simply be an anomalous result, given the small sample size.
Readers who feel, as MaximumPC did going into the test, that 256kbps is still too low for anything approaching real fidelity will likely cringe at the results. I’m not cringing exactly, but I do wonder why they didn’t bother to give the subjects uncompressed reference tracks to compare against.
Notes: Remember that 128kbps AAC is roughly equivalent to 160kbps MP3, since the AAC codec is more efficient. There’s apparently some suspicion that the iTunes Store uses a different encoder than the one that ships with iTunes. Testing for both bitrate and headphone differences at once throws variables into the mix that shouldn’t oughta be there – it would have been better to give everyone the good phones and focus on the bitrates alone, without confusing the matter. Ten people is a pretty small sample group – not small enough to be meaningless, but not large enough for substantial findings. Not that we need MaximumPC or focus groups to tell us how to feel about codecs and bitrates…
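To put the sample-size complaint in numbers: if you treat each subject's preference as a coin flip, a quick binomial back-of-envelope (my own arithmetic, not MaximumPC's analysis) shows how little 10 listeners can actually tell you:

```python
from math import comb

def p_at_least(k, n):
    """Probability of k or more heads in n fair-coin flips."""
    return sum(comb(n, i) for i in range(k, n + 1)) / 2**n

# 8 of 10 preferring the 256k track, if each pick were pure chance:
print(round(p_at_least(8, 10), 4))   # 0.0547 -- misses the conventional 0.05 cutoff
# 6 of 10, the Shure result:
print(round(p_at_least(6, 10), 4))   # 0.377  -- thoroughly unremarkable
```

A preference test isn't exactly a forced-choice correctness test, so take the exact thresholds loosely – but the scale of the problem is clear: even the "surprising" 8-of-10 result is about what you'd see by luck one time in eighteen.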
I’ve noticed audiophiles softening on compression lately. A good example is two DJ friends of mine who tour regularly. They used to play only vinyl or purchased CDs ripped to WAV. But they’ve started to mix it up, and they tell me that dropping a 128kbps MP3 on one of the biggest festival sound systems in Europe was indistinguishable from dropping the vinyl. Of course, that’s far from a valid experimental design… I’d like to see it tested with a large N.
I participate at the Hydrogenaudio Forums and that’s where I feel the most valid codec/bitrate comparisons are done. 256kbps AAC should be completely transparent for most music. In a listening test conducted in late 2005, 128kbps iTunes AAC scored as nearly transparent with many of the samples. Most concerns I see raised about high-bitrate lossy encoding are from people who’d rather have a lossless, reference copy of the audio that they can transcode into the codec/bitrate of their choice without worrying about artifacts or missing information. If that’s the case, don’t buy lossy-encoded music!
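Hydrogenaudio's comparisons are typically run as ABX trials rather than preference tests: you hear reference A, encoded B, then X, which is secretly one of the two, and you must say which. The skeleton of one session might look like this – the clip names and the `identify` callback are placeholders of mine for real audio and a real listener:

```python
import random

def abx_session(identify, trials=16, rng=None):
    """Sketch of one ABX session.

    Each trial presents reference A, encoded B, and then X, which is
    randomly either A or B.  `identify(a, b, x)` stands in for the
    listener and must return 'A' or 'B' -- their guess at which clip
    X actually was.  Returns the number of correct identifications;
    scores near trials/2 are indistinguishable from guessing.
    """
    rng = rng or random.Random()
    correct = 0
    for _ in range(trials):
        a, b = "lossless clip", "256k AAC clip"   # stand-ins for real audio
        x_label = rng.choice(["A", "B"])
        x = a if x_label == "A" else b
        if identify(a, b, x) == x_label:
            correct += 1
    return correct
```

The appeal of ABX over a preference test is that it measures whether you can hear *any* difference at all, and a run of trials gives you a score you can check against chance.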
I applauded iTunes Plus by buying my first album from iTunes. It’s definitely not my preferred method of acquiring music, but at reasonable prices for albums I think it’s a good deal.
Wow – I would not have expected that (near transparency with 128kbps AAC). The page doesn’t say how large the pool of respondents was – do you happen to know?
Listeners are listed in the sample plots. Looks like it ranged from 18-30 listeners per sample. There may have been more; I believe listeners that appear to be guessing are removed from the plots.
Ah, I see it now (was looking for something global). Hmm, still a small sample size. Would be interesting to gather up a collection of audio professionals (musicians, recording engineers, audiophiles, equipment manufacturers, etc.) and give them the exact same test.
the superior results with the apple earbuds make perfect sense to me, if that’s what the subjects were familiar with. a personal reference system will often give superior results to an ostensibly higher-resolution rig (this would lead into a big discussion of the problems inherent in double-blind test methodology for audio if i had the time….).
similarly, the fact that a 128k file should sound fine over a large PA system is also unsurprising: those systems are engineered for sound delivery, not resolution.
128k is not even close to transparent, and i’ve had little trouble distinguishing 256k from uncompressed CD audio (1411k) in an informal double-blind. add truly hi-rez formats to the mix (dsd, 24/192) and the 256 will be WOEFULLY inadequate, let alone 128. and i say this on purely sonic grounds.
wish i had the time to write more right now….