Linux – FILE size limitation according to Robert Love’s textbook

Tags: files, limit, linux, system-programming

From Robert Love's Linux System Programming (2007, O'Reilly), this is what is given in the first paragraph (Chapter 1, Page 10):

The file position’s maximum value is bounded only by the size of the C type used to store it, which is 64-bits in contemporary Linux.

But in the next paragraph he says:

A file may be empty (have a length of zero), and thus contain no valid bytes. The maximum file length, as with the maximum file position, is bounded only by limits on the sizes of the C types that the Linux kernel uses to manage files.

I know this might be very, very basic, but is he saying that the file size is limited by the FILE data type or the int data type?

Best Answer

He's saying it's bounded by a 64-bit type, which has a maximum value of (2 ^ 64) - 1 if unsigned, or (2 ^ 63) - 1 if signed (one bit holds the sign, +/-).

The type is not FILE; it's what the implementation uses to track the offset into the file, namely off_t, which on Linux is a typedef for a signed 64-bit type.¹ (2 ^ 63) - 1 = 9223372036854775807. If a terabyte is 1000 ^ 4 bytes, that's roughly 9.2 million TB. Presumably a signed type is used so that it can hold -1 (for errors, etc.) or a negative relative offset.

Functions like fseek() and ftell() take a signed long, which on 64-bit GNU systems is also 64 bits. (The POSIX variants fseeko() and ftello() use off_t directly.)


1. See types.h and typesizes.h in /usr/include/bits.
