When a child is forked, it inherits the parent's file descriptors. If the child closes a file descriptor, what happens?
It inherits a copy of the file descriptor. So closing the descriptor in the child closes it for the child, but not for the parent, and vice versa.
If the child starts writing, what happens to the file at the parent's end? Who manages these inconsistencies, the kernel or the user?
It's exactly (as in, literally) the same as two processes writing to the same file. The kernel schedules the processes independently, so you will likely get interleaved data in the file.
However, POSIX (to which *nix systems largely or completely conform) stipulates that the `read()` and `write()` functions from the C API (which map to system calls) are "atomic with respect to each other [...] when they operate on regular files or symbolic links". The GNU C manual also provisionally promises this for pipes (note that the default `PIPE_BUF`, which is part of the proviso, is 64 KiB). This means that calls in other languages/tools, such as use of `echo` or `cat`, should be included in that contract, so if two independent processes try to write "hello" and "world" simultaneously to the same pipe, what comes out the other end is either "helloworld" or "worldhello", and never something like "hweolrllod".
When a process calls the close function on a particular open file descriptor, the process's file table decrements the reference count by one. But since the parent and child are both holding the same file (the reference count is 2, and after close it drops to 1), and since it is not zero, the process can still continue to use the file without any problem.
There are TWO processes, the parent and the child. There is no "reference count" common to both of them. They are independent. With respect to what happens when one of them closes a file descriptor, see the answer to the first question.
Check that `/etc/ssh/sshd_config` contains:
UsePAM yes
and that `/etc/pam.d/sshd` contains:
session required pam_limits.so
Still no answer to why 1048576 is max.
The 1048576 limit seems to be per process, so it can be overcome by spreading work across multiple processes. (1048576 is 2^20, the Linux kernel's default for `fs.nr_open`, which caps how high the per-process hard limit can be raised; `fs.nr_open` can itself be changed via sysctl.)
Best Answer
`file-max` is the maximum number of files that can be opened across the entire system. This is enforced at the kernel level. The man page for `lsof` states that: [...] This is consistent with your observations, since the number of files reported by `lsof` is well below the `file-max` setting.

`ulimit` is used to enforce resource limits at the user level. The 'number of open files' parameter is set per user, but is applied to each process started by that user. In this case, a single Kafka process can have up to 1024 file handles open (the soft limit). You can raise this limit yourself up to the hard limit, 4096; raising the hard limit requires root access.
If Kafka is running as a single process, you can find the number of files opened by that process with `lsof -p [PID]`. Hope this clears things up.