I know different lossy audio compression techniques use different codecs, and this isn't about determining which is better or worse. Do you divide the bitrate by the number of channels to estimate the loss (or compression) per channel? Here are three generic codecs (names changed to protect the innocent), counting 5.1 as six discrete channels; a quick sketch of the arithmetic follows the list:

Codec A: 5.1, max. bitrate 1,509 kbps ≈ 250 kbps per channel?
Codec B: 5.1, max. bitrate 448 kbps ≈ 75 kbps per channel?
Codec C: 2.0, max. bitrate 160 kbps = 80 kbps per channel?
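For clarity, here's the back-of-the-envelope math I'm doing, as a minimal Python sketch. It assumes 5.1 counts as six full-bandwidth channels (even though the LFE is bandwidth-limited) and that the encoder splits bits evenly across channels, which real encoders don't actually do:

```python
# Naive per-channel figure: total bitrate divided by channel count.
# Assumption: 5.1 is treated as 6 equal channels; real codecs allocate
# bits dynamically, so this is only a rough comparison number.

def per_channel_kbps(total_kbps: float, channels: int) -> float:
    """Divide the stream's total bitrate evenly across its channels."""
    return total_kbps / channels

# Generic codecs from the question: (max bitrate in kbps, channel count)
codecs = {
    "Codec A": (1509, 6),  # 5.1 layout, 1.509 Mbps max
    "Codec B": (448, 6),   # 5.1 layout, 448 kbps max
    "Codec C": (160, 2),   # 2.0 layout, 160 kbps max
}

for name, (kbps, ch) in codecs.items():
    print(f"{name}: {kbps} kbps / {ch} ch = "
          f"{per_channel_kbps(kbps, ch):.1f} kbps per channel")

# Output:
# Codec A: 1509 kbps / 6 ch = 251.5 kbps per channel
# Codec B: 448 kbps / 6 ch = 74.7 kbps per channel
# Codec C: 160 kbps / 2 ch = 80.0 kbps per channel
```

So the rounded figures above (250 / 75 / 80) come from that simple division. Is that a meaningful way to compare how much each codec compresses, or does per-channel bitrate not work like that across different codecs?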