The "Cannot assign requested address" part of the error message comes from the kernel's TCP stack. When it happens intermittently, it typically means the space of available sockets is exhausted because too many sockets are stuck in a wait state (TIME_WAIT, or less probably FIN_WAIT_1 or FIN_WAIT_2).
The range of ephemeral socket ports can be displayed with cat /proc/sys/net/ipv4/ip_local_port_range. The default value on a stock Linux kernel is generally 32768 61000.
You may check the result of netstat -ton | grep WAIT on the client(s) and on the pgBouncer host when the system is busy. The -o flag shows the timer counters related to wait states.
If the total number of such sockets is close to 61000-32768=28232, then exhaustion of this range is likely your problem. Since a closed socket normally spends 60 seconds in the TIME_WAIT state, a client host that opens more than 28232 connections per minute will see new connections fail with the mentioned error until ports are freed.
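The arithmetic above can be checked with a quick shell sketch (the defaults are the values quoted earlier; on a real host, read the live range from /proc/sys/net/ipv4/ip_local_port_range instead):

```shell
# typical defaults from /proc/sys/net/ipv4/ip_local_port_range
low=32768
high=61000

# usable ephemeral ports; with a 60-second TIME_WAIT this is also the
# maximum number of new outbound connections per minute before
# port exhaustion kicks in
ports=$((high - low))
echo "$ports"
```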
As a first workaround, the TCP ports range may be extended:
# echo "1025 65535" >/proc/sys/net/ipv4/ip_local_port_range
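Note that writing to /proc does not survive a reboot; to make the change persistent, the same range can be set through sysctl (a sketch — the drop-in file path is an assumption, adjust to your distribution):

```
# /etc/sysctl.d/90-port-range.conf (path is an assumption)
net.ipv4.ip_local_port_range = 1025 65535
```

Apply it with sysctl -p /etc/sysctl.d/90-port-range.conf, or reboot.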
If that's not satisfactory, check the tcp_tw_recycle and tcp_tw_reuse flags, also tunable through /proc/sys/net/ipv4 and sysctl. They're defined as follows (from man tcp):
tcp_tw_recycle (Boolean; default: disabled; since Linux 2.4)
    Enable fast recycling of TIME_WAIT sockets. Enabling this
    option is not recommended since this causes problems when
    working with NAT (Network Address Translation).

tcp_tw_reuse (Boolean; default: disabled; since Linux 2.4.19/2.6)
    Allow to reuse TIME_WAIT sockets for new connections when it is
    safe from protocol viewpoint. It should not be changed without
    advice/request of technical experts.
Personally I had success with tcp_tw_recycle when faced with this problem with a MySQL client app, but don't take this as a recommendation; my understanding of TCP is superficial at best.
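If you do decide to try tcp_tw_reuse, the same sysctl mechanism applies (a sketch; the file path is an assumption). Note that tcp_tw_recycle is the one the man page warns breaks NAT, and it was removed entirely in Linux 4.12, so on modern kernels tcp_tw_reuse is the only one of the two left:

```
# /etc/sysctl.d/90-tw-reuse.conf (path is an assumption)
net.ipv4.tcp_tw_reuse = 1
```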
In short: no.
You can run show stats; to get total_received and total_sent per database. But even assuming you can check the stats before and after a query, the difference won't give you the size of the query result, not even in an ideal environment with no other parallel sessions. E.g.:
t=# copy (select id from t) to '/tmp/1';
COPY 1043482
t=# \! du -h /tmp/1
7.0M /tmp/1
checking received:
t=# select 40493353-40493297;
?column?
----------
56
(1 row)
Hm, so the size of the query result is 7MB, but the received bytes are 56. Ah! I saved the result of the query on the server! So pgbouncer indeed did not receive the large data set. OK, let's copy it to the client:
t=# \copy (select objectid from pond_user) to '/tmp/1';
COPY 1043482
t=# select 40493442-40493353;
?column?
----------
89
(1 row)
Same story... Maybe let's check the bytes sent?..
t=# select (355612622-343149820)/(1024*1024);
?column?
----------
11
(1 row)
11MB... Not a particularly precise match.
SO: pgbouncer does not log the result set size. You can use show stats to get a very rough approximation of sizes, assuming you can isolate sessions in time.
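As a sketch of that bookkeeping, here is the delta between the two total_received snapshots taken around the first copy in the transcript above:

```shell
# SHOW STATS total_received, sampled before and after the query
before=40493297
after=40493353
echo "received delta: $((after - before)) bytes"   # 56 bytes, for a 7MB result
```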
also
People use iptraf or similar tools for the same purpose; some log long-running queries with log_min_duration_statement and then repeat them with CREATE TABLE AS to get the size; some even write patches for the client.
even more
An analytic approach leads to even more complications:
https://stackoverflow.com/questions/13570613/making-sense-of-postgres-row-sizes
Measure the size of a PostgreSQL table row
Although Microsoft and MySQL do offer such an approximation in a cosy way.
As of now, pgBouncer doesn't support log rotation, so you have to handle it yourself.
You can refer to the sites below:
How to rotate PgBouncer logs in Linux/Windows ?
How To Manage Log Files With Logrotate On Ubuntu 12.10
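For reference, a minimal logrotate sketch along the lines of those articles (the log path is an assumption; copytruncate avoids having to make pgbouncer reopen its log file):

```
# /etc/logrotate.d/pgbouncer  (log path is an assumption)
/var/log/pgbouncer/pgbouncer.log {
    daily
    rotate 7
    compress
    missingok
    notifempty
    copytruncate
}
```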