32-bit to 64-bit skipping 48-bit

32-bit, 64-bit, computer-architecture, cpu-architecture, operating-systems

Computer architecture upgraded from 16-bit to 32-bit to 64-bit. What was the logic for skipping 48-bit? What was the reasoning for upgrading to 64-bit and not some other exponent?

The following table illustrates: 2^32 is 65536 times bigger than 2^16, so it seems logical to use 2^48, which is also 65536 times bigger than 2^32. Using 2^64 seems like a massive jump in comparison. (10 years after the introduction of amd64, desktop computers are sold with double-digit GB of RAM while servers use triple-digit GB of RAM.)

    2^16                        65.536
    2^32                 4.294.967.296  2^16 X 65536
    2^48           281.474.976.710.656  2^32 X 65536
    2^64    18.446.744.073.709.551.616  2^32 X 4294967296
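
To double-check the factors, here is a minimal C sketch (my own illustration, not part of the original question) that recomputes them with shifts. Note that 2^64 itself does not fit in a 64-bit integer, only 2^64 - 1 does, which is rather the point of how big the jump is:

    #include <stdio.h>
    #include <stdint.h>

    int main(void) {
        uint64_t p16 = 1ULL << 16;   /* 65536 */
        uint64_t p32 = 1ULL << 32;   /* 4294967296 */
        uint64_t p48 = 1ULL << 48;   /* 281474976710656 */

        printf("2^32 / 2^16 = %llu\n", (unsigned long long)(p32 / p16)); /* 65536 */
        printf("2^48 / 2^32 = %llu\n", (unsigned long long)(p48 / p32)); /* 65536 */
        /* 2^64 itself overflows a 64-bit integer; the largest value is 2^64 - 1 */
        printf("2^64 - 1    = %llu\n", (unsigned long long)UINT64_MAX);
        printf("2^64 / 2^32 = %llu\n", (unsigned long long)p32);         /* 4294967296 */
        return 0;
    }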

EDIT BELOW

I used an online decimal-to-binary converter and got these results. Apparently, 2^48 - 1 is the value you get when all 48 binary digits are 1s.

    1111111111111111                      65535  2^16 - 1 (16 ones)
    10000000000000000                     65536  2^16

    11111111111111111111111111111111                    4294967295  2^32 - 1 (32 ones)
    100000000000000000000000000000000                   4294967296  2^32

    111111111111111111111111111111111111111111111111            281474976710655 2^48 - 1 (48 ones)
    1000000000000000000000000000000000000000000000000           281474976710656 2^48

    1111111111111111111111111111111111111111111111111111111111111111    18446744073709551615    2^64 - 1 (64 ones)
    10000000000000000000000000000000000000000000000000000000000000000   18446744073709551616    2^64
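
Instead of an online converter, a short C helper (a sketch of my own for illustration) can print the same patterns, emitting the bits from most significant to least significant:

    #include <stdio.h>
    #include <stdint.h>

    /* Print 'bits' binary digits of v, most significant first, then the decimal value. */
    static void print_binary(uint64_t v, int bits) {
        for (int i = bits - 1; i >= 0; i--)
            putchar(((v >> i) & 1) ? '1' : '0');
        printf("  %llu\n", (unsigned long long)v);
    }

    int main(void) {
        print_binary((1ULL << 16) - 1, 16);  /* 16 ones -> 65535 */
        print_binary((1ULL << 32) - 1, 32);  /* 32 ones -> 4294967295 */
        print_binary((1ULL << 48) - 1, 48);  /* 48 ones -> 281474976710655 */
        print_binary(UINT64_MAX, 64);        /* 64 ones -> 18446744073709551615 */
        return 0;
    }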

Best Answer

64-bit is the next logical step up.

The reason is mainly that doubling (or halving) the number of bits is easy to handle in software and hardware for systems that natively operate at a different size. 32-bit systems were already routinely dealing with 64-bit values internally before 64-bit CPUs became available.

E.g.: A 32-bit system can easily handle a 64-bit number by storing it in two 32-bit variables/registers.
Dealing with a 48-bit number is awkward: you would need to either use a 32-bit and a 16-bit variable together, use only part of a 32-bit variable, or use three 16-bit variables. None of these solutions for 48-bit is optimal.
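
As a rough sketch of what a 32-bit compiler does with 64-bit integers (my own illustration, not quoted from any particular toolchain): the value is kept as two 32-bit halves, and an addition becomes two 32-bit additions with a carry in between.

    #include <stdio.h>
    #include <stdint.h>

    /* A 64-bit value stored as two 32-bit halves, the way a 32-bit system
       would hold it in two registers or two variables. */
    typedef struct {
        uint32_t lo;
        uint32_t hi;
    } u64_pair;

    /* Add two such values using only 32-bit arithmetic plus a carry,
       mirroring an add / add-with-carry instruction pair on a 32-bit CPU. */
    static u64_pair add_u64_pair(u64_pair a, u64_pair b) {
        u64_pair r;
        r.lo = a.lo + b.lo;                       /* low halves, may wrap around */
        uint32_t carry = (r.lo < a.lo) ? 1 : 0;   /* wrap-around means a carry   */
        r.hi = a.hi + b.hi + carry;               /* high halves plus the carry  */
        return r;
    }

    int main(void) {
        u64_pair a = { 0xFFFFFFFFu, 0x00000001u };   /* 0x1FFFFFFFF */
        u64_pair b = { 0x00000002u, 0x00000000u };   /* 2           */
        u64_pair s = add_u64_pair(a, b);
        printf("0x%08X%08X\n", s.hi, s.lo);          /* 0x0000000200000001 */
        return 0;
    }

A 48-bit value would need the same code plus masking of the unused upper 16 bits after every operation, which is exactly the extra handling that makes it awkward.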

In general: any system that works in X bits can easily handle sizes of (N * X) and (X / N), where N is a power of 2. So the logical progression is 1, 2, 4, 8, 16, 32, 64, 128, 256, 512 and so on.
Every other size requires more complicated handling in hardware and/or software and is therefore sub-optimal.

So when going to a larger bit-size in a hardware architecture, it makes sense to use the same progression, as it only takes minor updates to operating systems, software and compilers to support the new bit-size.

(This all applies to the native bit-size for CPU registers. When you talk about the "number of address lines" that address the RAM chips, you may indeed see a smaller number than what is natural for the architecture.
Internally these CPUs use more bits, but not all bits are connected to real address lines.
E.g.: 20 lines on the 8088 and 8086 CPUs, 24 lines on the 80286, and 36 lines on the Pentium II.)
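
As a small worked example (my own sketch; the line counts are the commonly cited figures for those CPUs), the number of address lines directly determines how much physical memory can be addressed:

    #include <stdio.h>
    #include <stdint.h>

    /* Bytes of physical memory addressable with a given number of address lines. */
    static uint64_t addressable_bytes(int address_lines) {
        return 1ULL << address_lines;
    }

    int main(void) {
        printf("20 lines: %llu bytes (1 MB,  8086/8088)\n",
               (unsigned long long)addressable_bytes(20));
        printf("24 lines: %llu bytes (16 MB, 80286)\n",
               (unsigned long long)addressable_bytes(24));
        printf("36 lines: %llu bytes (64 GB, Pentium II)\n",
               (unsigned long long)addressable_bytes(36));
        return 0;
    }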
