A broad question. Perhaps someone can weigh in on your question about the kernel TCP stack between specific kernel versions.
A couple of general answers:
From the client side
In the event of a SIGKILL signal, the kernel terminates program execution and, among other things, closes the process's open file descriptors. TCP sockets are handled a bit differently by the kernel than regular files, in that they need to be flushed and go through the TCP shutdown process.
The difference between an immediate FIN, ACK from the client and a longer socket shutdown could depend on what state the client's TCP connection was in when the client application was terminated. But generally, the kernel will close the application's open sockets.
From the server side
A server does not always know when a client disconnects. The most reliable way to determine whether a client has hung up is to attempt a read from the socket; a zero-length read (EOF) indicates the peer closed the connection.
TCP is designed to be resilient to connection latency and intermittent failures. This also translates into some challenges in reliably detecting disconnections when the FIN, ACK four-way disconnection handshake does not occur.
Summary
What you might be seeing upon a kill -9
of the client is the server going into the CLOSE_WAIT
TCP state, where it is waiting for a TCP timeout. This can take a while. Even though the client is gone, if the kernel did not complete the TCP disconnection handshake, the server will have to time out.
If I remember correctly, this can take several additional seconds, and is likely why you still see ESTABLISHED
when running both the client and server on the same host.
The tool you are looking for is socat
. Since you are testing one single web server, you can ask it to establish a permanent connection to that server (as long as it doesn't choose to close it -- adjust your timeouts accordingly) and once this is done, you can use the connection as a tunnel. Below are two ways to accomplish this.
Querying through a Unix socket
curl
has a --unix-socket
option that allows you to send HTTP requests and receive HTTP replies through a Unix socket (thank you, thrig, for your enlightening comment).
You would use it like this:
socat TCP:10.5.1.1:80 UNIX-LISTEN:/tmp/curl.sock,fork
Then, on another terminal:
curl --unix-socket /tmp/curl.sock http://10.5.1.1/message1.txt
curl --unix-socket /tmp/curl.sock http://10.5.1.1/message2.txt
...
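The same trick works from a script, not just from curl. A hedged sketch using only Python's standard library (the UnixHTTPConnection class is a name made up here, and /tmp/curl.sock is the socket created by the socat command above):

```python
import http.client
import socket

class UnixHTTPConnection(http.client.HTTPConnection):
    """An HTTPConnection that dials a Unix socket instead of TCP."""

    def __init__(self, path):
        # The host is only used for the Host: request header.
        super().__init__("localhost")
        self.unix_path = path

    def connect(self):
        self.sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
        self.sock.connect(self.unix_path)

# With the socat listener above running:
# conn = UnixHTTPConnection("/tmp/curl.sock")
# conn.request("GET", "/message1.txt")
# print(conn.getresponse().read())
```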
Querying through a pseudo-HTTP-proxy
You can also make your tunnel available as a pseudo-proxy through which wget
, curl
, ... would connect. This solution has the advantage of not being limited to curl
.
This time, socat
listens on a local TCP port (say 3128):
socat TCP:10.5.1.1:80 TCP-LISTEN:3128,fork,reuseaddr
Then, on another terminal:
export http_proxy='http://localhost:3128'
curl http://10.5.1.1/message.txt
wget http://10.5.1.1/message.txt
....
Note that since the HTTP client is using a proxy, the HTTP request will be slightly altered and this may not be desirable.
Of course, neither of these two solutions is intended to be used with multiple HTTP servers, since it is your web server at the end of the tunnel that receives all the requests.
Best Answer
TCP keepalive. It tears down connections that have been unused for 2 hours by default. This can easily be changed; see http://tldp.org/HOWTO/TCP-Keepalive-HOWTO/usingkeepalive.html
In a nutshell, the kernel tunable "tcp_keepalive_time", which is exposed via
/proc/sys/net/ipv4/tcp_keepalive_time
can be changed from the default of 7200 seconds as required.