Comparison of different formats

In this section we will look at how various types of audio compression affect the frequency distribution. All conversions were done with the iTunes encoder, which in my experience produces the best-sounding results. The screen recordings were made with CamStudio. The audio quality is poor because what you're hearing comes from my speakers, travels across the room, and goes into my computer's microphone. The formats we're looking at are MP3 at 128 and 320 kbps, AAC (M4A) at 256 kbps (iTunes Plus) and 320 kbps, and of course the original WAV.

The original WAV

The only part of this graph we really need to pay attention to is the extreme right, starting at around 16,000 Hz. The rest of it only changes noticeably at extremely low bitrates, which you should be able to detect by ear anyway. This is what a normal WAV should look like:
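If you want a rough numerical check to go along with the graph, the sketch below reads a WAV with Python's scipy and reports how much of the signal's energy sits above 16,000 Hz. The filename and the 4096-sample analysis window are placeholders assumed for illustration, not part of the setup described above.

    # Rough sketch: how much energy does a WAV carry above 16 kHz?
    # "original.wav" is a placeholder filename.
    import numpy as np
    from scipy.io import wavfile
    from scipy.signal import spectrogram

    rate, samples = wavfile.read("original.wav")
    if samples.ndim > 1:                      # mix stereo down to mono
        samples = samples.mean(axis=1)

    freqs, _, power = spectrogram(samples, fs=rate, nperseg=4096)

    high = power[freqs >= 16000].sum()        # energy in the band we care about
    total = power.sum()
    print(f"Energy above 16 kHz: {100 * high / total:.3f}% of total")

On an untouched WAV this should come out small but clearly non-zero; after heavy lossy compression it drops to practically nothing.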

MP3 128kbps compression

The difference should be plenty obvious: almost everything above 16,000 Hz is gone. Audiochecker should always be able to catch these; they will sound noticeably muddy, and the effect should be visible no matter what settings you run the analysis with. It will also show up in any software's "spectral view". Depending on the encoder and the file, sometimes everything above the cutoff is completely gone, and sometimes there are small "jumps", as seen here:
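Another way to make the same check, similar in spirit to what a lossless checker does (this is my own minimal sketch, not Audiochecker's actual algorithm), is to estimate the highest frequency that still carries meaningful energy. The 60 dB threshold and the filename below are assumptions to tune for your own material, and the MP3 would have to be decoded back to WAV first.

    # Minimal sketch: estimate the effective cutoff frequency of a decoded file.
    # The 60 dB threshold below the loudest bin is an assumption, not a standard.
    import numpy as np
    from scipy.io import wavfile
    from scipy.signal import spectrogram

    def cutoff_frequency(path, threshold_db=60):
        rate, samples = wavfile.read(path)
        if samples.ndim > 1:
            samples = samples.mean(axis=1)
        freqs, _, power = spectrogram(samples, fs=rate, nperseg=4096)
        avg_db = 10 * np.log10(power.mean(axis=1) + 1e-12)   # average spectrum in dB
        floor = avg_db.max() - threshold_db
        active = freqs[avg_db > floor]                        # bins still above the floor
        return float(active.max()) if active.size else 0.0

    # A 128 kbps MP3 decoded back to WAV should come out around 16 kHz;
    # the untouched original should reach close to rate / 2.
    print(cutoff_frequency("decoded_128kbps.wav"))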

MP3 320kbps

The effects, when compared to the WAV, should still be fairly easy to spot here. However, this is the point where they become harder to notice in things like spectral views or other less clear types of graphs.

AAC 256kbps

This is what iTunes Plus uses. The spectrum actually looks better than the MP3 versions do, but in my opinion it sounds worse. The lost frequencies are all at the very high end. This is where the moving graph really comes in handy, as I don't think this compression would be easy to notice with any other method. It should be noted that the drop is easier to spot during sections with fewer very high frequencies - pay special attention when the chorus ends and Christina Aguilera starts singing; suddenly it's much more noticeable.
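To see why the sparser passages help, you can track the level of the 16 to 20 kHz band over time and watch where it falls away. This is only an illustrative sketch with a placeholder filename, not something used for the videos here.

    # Sketch: level of the 16-20 kHz band over time, printed second by second.
    # "track.wav" is a placeholder for whatever decoded file you're inspecting.
    import numpy as np
    from scipy.io import wavfile
    from scipy.signal import spectrogram

    rate, samples = wavfile.read("track.wav")
    if samples.ndim > 1:
        samples = samples.mean(axis=1)

    freqs, times, power = spectrogram(samples, fs=rate, nperseg=4096)
    band = (freqs >= 16000) & (freqs < 20000)
    band_db = 10 * np.log10(power[band].sum(axis=0) + 1e-12)

    # Quiet, sparse sections (a lone vocal after a dense chorus) are where the
    # drop shows most clearly, so look for the dips in this listing.
    for second in range(int(times[-1])):
        idx = int(np.abs(times - second).argmin())
        print(f"{second:4d} s: {band_db[idx]:6.1f} dB")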

AAC 320kbps

Not very different from 256 kbps. I've added a second video showing the very end of the song, as the final hit fades out, where the signs of compression are most visible. However, be careful if the end is the only sign of compression: it's possible, especially with more recent tracks, that the final hit was sampled from a compressed source while the rest of the track is lossless. A lot of producers today simply don't care what they are sampling from.

What do I look for exactly?

Now that we've seen how different types of compression change the graph, let's examine what is and what isn't a sign of lossy audio on the next page.