Everyone, stop arguing... there are just too many differences. From the file formats (WAVE/SDII), to encoding and decoding, to the software algorithms, to how the audio is generated, PCs and Macs differ in too many places.
Here is one example.
Sample Points and Sample Frames
A large part of interpreting WAVE files revolves around the two concepts of sample points and sample frames.
A sample point is a value representing a sample of a sound at a given moment in time. For waveforms with greater than 8-bit resolution, each sample point is stored as a linear, 2's-complement value which may be from 9 to 32 bits wide (as determined by the wBitsPerSample field in the Format chunk, assuming PCM format, i.e. an uncompressed format). For example, each sample point of a 16-bit waveform would be a 16-bit word (i.e. two 8-bit bytes) where 32767 (0x7FFF) is the highest value and -32768 (0x8000) is the lowest value. For 8-bit (or less) waveforms, each sample point is a linear, unsigned byte where 255 is the highest value and 0 is the lowest value. Obviously, this signed/unsigned sample point discrepancy between 8-bit and larger-resolution waveforms was one of those "oops" scenarios where some Microsoft employee decided to change the sign sometime after 8-bit WAVE files were common but 16-bit WAVE files hadn't yet appeared.
Because most CPUs' read and write operations deal with 8-bit bytes, it was decided that a sample point should be rounded up to a size which is a multiple of 8 bits when stored in a WAVE file. This makes the file easier to read into memory. If your ADC produces a sample point from 1 to 8 bits wide, it should be stored in the WAVE as an 8-bit byte (i.e. unsigned char). From 9 to 16 bits wide, it should be stored as a 16-bit word (i.e. signed short). From 17 to 24 bits wide, it should be stored as three bytes. From 25 to 32 bits wide, it should be stored as a 32-bit doubleword (i.e. signed long). And so on.
Furthermore, the data bits should be left-justified, with any remaining (i.e. pad) bits zeroed. For example, consider the case of a 12-bit sample point. It has 12 bits, so the sample point must be saved as a 16-bit word. Those 12 bits should be left-justified so that they become bits 4 to 15 inclusive, and bits 0 to 3 should be set to zero. Shown below is how a 12-bit sample point with a value of binary 101000010111 is formatted left-justified as a 16-bit word.
 ___ ___ ___ ___ ___ ___ ___ ___ ___ ___ ___ ___ ___ ___ ___ ___
| 1 | 0 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 1 | 1 | 0 | 0 | 0 | 0 |
|___|___|___|___|___|___|___|___|___|___|___|___|___|___|___|___|
<----------------------------------------------><-------------->
          12-bit sample point is                 rightmost 4 bits
          left-justified                         are zero padded
But note that, because the WAVE format uses Intel little-endian byte order, the LSB is stored first in the WAVE file, like so:
 ___ ___ ___ ___ ___ ___ ___ ___     ___ ___ ___ ___ ___ ___ ___ ___
| 0 | 1 | 1 | 1 | 0 | 0 | 0 | 0 |   | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 1 |
|___|___|___|___|___|___|___|___|   |___|___|___|___|___|___|___|___|
<--------------><-------------->    <------------------------------>
  bits 0 to 3      4 pad bits                 bits 4 to 11
For multichannel sounds (for example, a stereo waveform), single sample points from each channel are interleaved. For example, assume a stereo (i.e. 2-channel) waveform. Instead of storing all of the sample points for the left channel first and then all of the sample points for the right channel, you "mix" the two channels' sample points together: first the first sample point of the left channel, then the first sample point of the right channel, then the second sample point of the left channel, then the second sample point of the right channel, and so on, alternating between the channels. This is what is meant by interleaved data; you store the next sample point of each of the channels in turn, so that the sample points that are meant to be "played" (i.e. sent to a DAC) simultaneously are stored contiguously.
The sample points that are meant to be "played" (i.e. sent to a DAC) simultaneously are collectively called a sample frame. In the example of our stereo waveform, every two sample points makes up another sample frame. This is illustrated below for that stereo example.
       sample      sample                sample
       frame 0     frame 1               frame N
      _____ _____ _____ _____           _____ _____
     | ch1 | ch2 | ch1 | ch2 |  . . .  | ch1 | ch2 |
     |_____|_____|_____|_____|         |_____|_____|

      _____
     |     |  = one sample point
     |_____|
For a monophonic waveform, a sample frame is merely a single sample point (i.e. there's nothing to interleave). For multichannel waveforms, you should follow the conventions shown below for the order in which to store channels within the sample frame. (Below, a single sample frame is displayed for each example of a multichannel waveform.)
channels       1         2
            _________ _________
           |  left   |  right  |
 stereo    |         |         |
           |_________|_________|

               1         2         3
            _________ _________ _________
           |  left   |  right  | center  |
 3 channel |         |         |         |
           |_________|_________|_________|

               1         2         3         4
            _________ _________ _________ _________
           |  front  |  front  |  rear   |  rear   |
 quad      |  left   |  right  |  left   |  right  |
           |_________|_________|_________|_________|

               1         2         3         4
            _________ _________ _________ _________
           |  left   | center  |  right  |surround |
 4 channel |         |         |         |         |
           |_________|_________|_________|_________|

               1         2         3         4         5         6
            _________ _________ _________ _________ _________ _________
           |  left   |  left   | center  |  right  |  right  |surround |
 6 channel | center  |         |         | center  |         |         |
           |_________|_________|_________|_________|_________|_________|
The sample points within a sample frame are packed together; there are no unused bytes between them. Likewise, the sample frames are packed together with no pad bytes.
Note that the above discussion outlines the format of data within an uncompressed data chunk. There are some techniques for storing compressed data in a data chunk; obviously, that data would need to be decompressed first, after which it will adhere to the above layout.
Sample points and sample frames are handled differently across WAVE, SDII, and AIFF...
The bit depths of the files differ too... and so does the recording precision...
So the resulting sound differs as well...
By that logic... the differences are endless.
And then there are the differences at the system level.
From the logical architecture down to the physical architecture the two platforms differ, and the way the CPUs process work (threading, pipeline depth, concurrent processing, and so on) differs too...
All these differences mean the computed results will differ as well...
To put it bluntly... comparing the Photoshop image quality each machine computes in the same amount of time... where is the comparability in that?...
As the old saying goes:
It doesn't matter whether the cat is black or white; a cat that catches mice is a good cat!