Re: Big and little endian issue
At 3:08 PM -0800 4/27/98, chefren wrote:
>On 27 Apr 98 at 8:32, Philip Ronzone wrote:
>> An IDE or SCSI controller has no knowledge of, and doesn't care,
>> and COULDN'T care even if it wanted to, whether the data to/from
>> the disk is BE, LE, mixed, or all ASCII text.
>> Since it doesn't know, how in the heck can it justify changing
>> byte ordering?
>This seems like a classic software-hardware clash. The
>hardware guys cannot imagine how their bridge itself could
>swap bytes, and because they cannot imagine it they don't
>want to pass control to the software guys. If the hardware of
>all PCI controllers had a standard BE/LE swap bit,
>the software guys could write drivers and get rid of the
>whole problem in a very short time.
>Would such a bit have cost more than $0.01 for each
>PCI interface??? And aren't there $0.10 things that
>could better have been left out?
Actually this is a system architect vs a hardware designer clash.
The true cost is not the extra gates and datapaths of a byte-swapper; it is
the complexity they introduce.
(To get correct results, the software has to know the operating mode of
those gates at the time the data passed through. That makes the bits in the
control registers that govern the swapping part of the state descriptor of
the machine, which in turn forces certain whole sequences of code in some
systems to run with interrupts disabled, so that the context can't get
messed up by an interrupt routine that needs to access a status bit through
the same bridge, and so on through long chains of unexpected consequences.
A typical scenario: the system's realtime performance becomes intolerably
bad due to the long latencies caused by those disabled-interrupt sequences,
so someone optimizes the code without understanding why it had to be that
way, runs a test, finds the realtime performance is now great, and doesn't
realize he has just introduced a very subtle bug that will occasionally
scramble some of his data, but only under unusual conditions.)
Hardware designers naturally solve the problems they see by adding
hardware, and resolve arguments by adding mode bits that allow the hardware
to satisfy all parties (but of course not at the same time).
The subtle system errors that are a consequence of adding all those degrees
of freedom can have enormous effects downstream, like making the software
late or worse yet making it fail under certain rare timings of events, so
that customers lose faith in the product. Such things can wipe out whole
product lines.
I am firmly backing Ronzone in this argument; he's seeing the bigger picture.
Been there, done that, first as a hardware designer, then a software
engineer, and then a system architect.