I think the confusion comes from the fact that the underlying system call that ulimit wraps is called setrlimit.
Excerpt from the ulimit man page:
The ulimit() function shall control process limits. The process limits
that can be controlled by this function include the maximum size of a
single file that can be written (this is equivalent to using
setrlimit() with RLIMIT_FSIZE).
Additionally, if you look at the setrlimit man page, the underlying data structure which contains the limit information is called rlimit.
Excerpt from the setrlimit man page:
getrlimit and setrlimit get and set resource limits respectively. Each
resource has an associated soft and hard limit, as defined by the
rlimit structure (the rlim argument to both getrlimit() and
setrlimit()):
struct rlimit {
    rlim_t rlim_cur;  /* Soft limit */
    rlim_t rlim_max;  /* Hard limit (ceiling for rlim_cur) */
};
The 1 GiB limit for Linux kernel memory in a 32-bit system is a consequence of 32-bit addressing, and it's a pretty stiff limit. It's not impossible to change, but it's there for a very good reason; changing it has consequences.
Let's take the wayback machine to the early 1990s, when Linux was being created. Back in those days, we'd have arguments about whether Linux could be made to run in 2 MiB of RAM or if it really needed 4 whole MiB. Of course, the high-end snobs were all sneering at us, with their 16 MiB monster servers.
What does that amusing little vignette have to do with anything? In that world, it's easy to make decisions about how to divide up the 4 GiB address space you get from simple 32-bit addressing. Some OSes just split it in half, treating the top bit of the address as the "kernel flag": addresses 0 through 2³¹−1 had the top bit cleared and were for user-space code, and addresses 2³¹ through 2³²−1 had the top bit set and were for the kernel. You could just look at the address and tell: 0x80000000 and up, it's kernel space; otherwise, it's user space.
As PC memory sizes ballooned toward that 4 GiB memory limit, this simple 2/2 split started to become a problem. User space and kernel space both had good claims on lots of RAM, but since our purpose in having a computer is generally to run user programs, rather than to run kernels, OSes started playing around with the user/kernel divide. The 3/1 split is a common compromise.
As to your question about physical vs virtual, it actually doesn't matter. Technically speaking, it's a virtual memory limit, but that's just because Linux is a VM-based OS. Installing 32 GiB of physical RAM won't change anything, nor will it help to swapon a 32 GiB swap partition. No matter what you do, a 32-bit Linux kernel will never be able to address more than 4 GiB simultaneously.
(Yes, I know about PAE. Now that 64-bit OSes are finally taking over, I hope we can start forgetting that nasty hack. I don't believe it can help you in this case anyway.)
The bottom line is that if you're running into the 1 GiB kernel VM limit, you can rebuild the kernel with a 2/2 split, but that directly impacts user space programs.
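On 32-bit x86, the split is a build-time Kconfig choice. A sketch of the relevant .config lines for a 2/2 split, assuming a kernel tree that offers the standard VMSPLIT options:

```
# CONFIG_VMSPLIT_3G is not set
CONFIG_VMSPLIT_2G=y
CONFIG_PAGE_OFFSET=0x80000000
```

CONFIG_PAGE_OFFSET is where kernel virtual addresses begin, so moving it from the default 0xC0000000 down to 0x80000000 is exactly the trade described above: the kernel gains 1 GiB of virtual address space and every user process loses the same amount.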
64-bit really is the right answer.
Best Answer
It says right there in the article:
The setrlimit man page says:
So it stopped working in 2.4.30. The changelog for 2.4.30 says something about this: