Postgresql – How to disable Postgres from listening on IPv6

configuration, postgresql, postgresql-9.3, tcpip

I am trying to stop Postgres from listening on IPv6 because of a bunch of strange kernel trace messages that are constantly being written to syslog.

I'd prefer not to disable IPv6 in the OS itself, so after seeing mention of the trace messages and finding the blog post Disable IPv6 Postgres and PGBouncer, I followed step 2 and updated postgresql.conf (changing "*" to "0.0.0.0"):

# grep listen postgresql.conf
listen_addresses = '0.0.0.0'

I also found this entry in pg_hba.conf and commented it out:

# IPv6 local connections:
#host    all             all             ::1/128                 trust

I restarted Postgres. netstat seems to show that it's not listening on IPv6 any more:

# netstat -ntl | grep 5432
tcp        0      0 0.0.0.0:5432                0.0.0.0:*                   LISTEN
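That netstat line matches the config change: a socket bound to 0.0.0.0 is an IPv4-only wildcard listener, while the previous listen_addresses = '*' additionally opens an IPv6 wildcard socket. A minimal Python sketch of the distinction (nothing Postgres-specific, port 0 just lets the OS pick a free port):

```python
import socket

# listen_addresses = '0.0.0.0' -> an IPv4-only wildcard TCP listener,
# which is exactly what the netstat output above shows.
s4 = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
s4.bind(("0.0.0.0", 0))
s4.listen(1)
print(s4.family)   # AF_INET
s4.close()

# listen_addresses = '*' would also open an AF_INET6 socket bound to '::',
# so an IPv6 TCP listener would show up in netstat as well.
if socket.has_ipv6:
    s6 = socket.socket(socket.AF_INET6, socket.SOCK_STREAM)
    print(s6.family)   # AF_INET6
    s6.close()
```

So as far as TCP listeners go, the config change did what it was supposed to.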

It seems that lsof has a different idea:

# lsof -i6 | grep postgres
postmaste 37921 postgres    8u  IPv6 39125550      0t0  UDP localhost:38892->localhost:38892
postmaste 37944 postgres    8u  IPv6 39125550      0t0  UDP localhost:38892->localhost:38892
postmaste 37945 postgres    8u  IPv6 39125550      0t0  UDP localhost:38892->localhost:38892
<snip>

Could this be a bug? Or am I missing something obvious here?

System is:
Oracle Linux 6.9
Kernel 3.8.13-118.20.2.el6uek.x86_64
PostgreSQL 9.3.21

Trace message from syslog:

Mar  9 13:36:35 atlassiandb100 kernel: ------------[ cut here ]------------
Mar  9 13:36:35 atlassiandb100 kernel: WARNING: at net/core/dst.c:285 dst_release+0x79/0x80()
Mar  9 13:36:35 atlassiandb100 kernel: Hardware name: PowerEdge R630
Mar  9 13:36:35 atlassiandb100 kernel: Modules linked in: ipmi_si dell_rbu nfsv3 nfs_acl nfsv4 auth_rpcgss nfs fscache lockd sunrpc 8021q garp stp llc bonding ipv6 ext3 jbd ext2 dm_queue_length dm_multipath dcdbas shpchp sg bnx2x ptp pps_core libcrc32c mdio coretemp hwmon kvm_intel kvm microcode pcspkr ipmi_devintf ipmi_msghandler ext4 jbd2 mbcache sd_mod crc_t10dif ghash_clmulni_intel crc32c_intel aesni_intel ablk_helper cryptd lrw aes_x86_64 xts gf128mul ahci libahci qla2xxx scsi_transport_fc scsi_tgt megaraid_sas mxm_wmi wmi dm_mirror dm_region_hash dm_log dm_mod [last unloaded: ipmi_si]
Mar  9 13:36:35 atlassiandb100 kernel: Pid: 35856, comm: postmaster Tainted: G        W    3.8.13-118.20.2.el6uek.x86_64 #2
Mar  9 13:36:35 atlassiandb100 kernel: Call Trace:
Mar  9 13:36:35 atlassiandb100 kernel: <IRQ>  [<ffffffff8106149f>] warn_slowpath_common+0x7f/0xc0
Mar  9 13:36:35 atlassiandb100 kernel: [<ffffffff810a0b79>] ? update_rq_runnable_avg+0xd9/0x1d0
Mar  9 13:36:35 atlassiandb100 kernel: [<ffffffff810614fa>] warn_slowpath_null+0x1a/0x20
Mar  9 13:36:35 atlassiandb100 kernel: [<ffffffff814dcab9>] dst_release+0x79/0x80
Mar  9 13:36:35 atlassiandb100 kernel: [<ffffffffa084c0c9>] udpv6_queue_rcv_skb+0x2b9/0x370 [ipv6]
Mar  9 13:36:35 atlassiandb100 kernel: [<ffffffffa084c725>] __udp6_lib_rcv+0x255/0x600 [ipv6]
Mar  9 13:36:35 atlassiandb100 kernel: [<ffffffff81096a15>] ? ttwu_do_wakeup+0x45/0x100
Mar  9 13:36:35 atlassiandb100 kernel: [<ffffffffa084cae5>] udpv6_rcv+0x15/0x20 [ipv6]
Mar  9 13:36:35 atlassiandb100 kernel: [<ffffffffa0833929>] ip6_input_finish+0x179/0x3a0 [ipv6]
Mar  9 13:36:35 atlassiandb100 kernel: [<ffffffffa0833ba8>] ip6_input+0x58/0x60 [ipv6]
Mar  9 13:36:35 atlassiandb100 kernel: [<ffffffffa08332eb>] ip6_rcv_finish+0x2b/0xb0 [ipv6]
Mar  9 13:36:35 atlassiandb100 kernel: [<ffffffffa0833630>] ipv6_rcv+0x2c0/0x440 [ipv6]
Mar  9 13:36:35 atlassiandb100 kernel: [<ffffffff814d7dde>] __netif_receive_skb+0x56e/0x770
Mar  9 13:36:35 atlassiandb100 kernel: [<ffffffff81096875>] ? scheduler_tick+0x115/0x150
Mar  9 13:36:35 atlassiandb100 kernel: [<ffffffff814d80e3>] process_backlog+0x103/0x1f0
Mar  9 13:36:35 atlassiandb100 kernel: [<ffffffff814d8905>] net_rx_action+0x105/0x2b0
Mar  9 13:36:35 atlassiandb100 kernel: [<ffffffff8106a0f7>] __do_softirq+0xd7/0x240
Mar  9 13:36:35 atlassiandb100 kernel: [<ffffffff815ae03c>] ? call_softirq+0x1c/0x30
Mar  9 13:36:35 atlassiandb100 kernel: [<ffffffff815ae03c>] call_softirq+0x1c/0x30
Mar  9 13:36:35 atlassiandb100 kernel: <EOI>  [<ffffffff810174f5>] do_softirq+0x65/0xa0
Mar  9 13:36:35 atlassiandb100 kernel: [<ffffffff81069f94>] local_bh_enable+0x94/0xa0
Mar  9 13:36:35 atlassiandb100 kernel: [<ffffffff814d6f01>] dev_queue_xmit+0x1a1/0x410
Mar  9 13:36:35 atlassiandb100 kernel: [<ffffffffa0831f1a>] ip6_finish_output2+0xfa/0x350 [ipv6]
Mar  9 13:36:35 atlassiandb100 kernel: [<ffffffff814caf3d>] ? csum_partial_copy_fromiovecend+0x18d/0x250
Mar  9 13:36:35 atlassiandb100 kernel: [<ffffffffa0832208>] ip6_finish_output+0x98/0xc0 [ipv6]
Mar  9 13:36:35 atlassiandb100 kernel: [<ffffffffa08322a8>] ip6_output+0x78/0xb0 [ipv6]
Mar  9 13:36:35 atlassiandb100 kernel: [<ffffffff81510e8d>] ? ip_generic_getfrag+0x3d/0xb0
Mar  9 13:36:35 atlassiandb100 kernel: [<ffffffffa0832329>] ip6_local_out+0x29/0x30 [ipv6]
Mar  9 13:36:35 atlassiandb100 kernel: [<ffffffffa08325fa>] ip6_push_pending_frames+0x2ca/0x450 [ipv6]
Mar  9 13:36:35 atlassiandb100 kernel: [<ffffffffa0849ac8>] udp_v6_push_pending_frames+0x168/0x410 [ipv6]
Mar  9 13:36:35 atlassiandb100 kernel: [<ffffffffa084a7f5>] udpv6_sendmsg+0x7e5/0xc10 [ipv6]
Mar  9 13:36:35 atlassiandb100 kernel: [<ffffffff81544d65>] inet_sendmsg+0x45/0xb0
Mar  9 13:36:35 atlassiandb100 kernel: [<ffffffff814bd550>] sock_sendmsg+0xb0/0xe0
Mar  9 13:36:35 atlassiandb100 kernel: [<ffffffff8113c51e>] ? free_pages+0x3e/0x40
Mar  9 13:36:35 atlassiandb100 kernel: [<ffffffff8115c042>] ? tlb_finish_mmu+0x32/0x50
Mar  9 13:36:35 atlassiandb100 kernel: [<ffffffff8116353a>] ? unmap_region+0xea/0x110
Mar  9 13:36:35 atlassiandb100 kernel: [<ffffffff814bd6d9>] sys_sendto+0x159/0x1c0
Mar  9 13:36:35 atlassiandb100 kernel: [<ffffffff812a9a97>] ? __percpu_counter_add+0x67/0x80
Mar  9 13:36:35 atlassiandb100 kernel: [<ffffffff815aba8c>] system_call_fastpath+0x16/0x1b
Mar  9 13:36:35 atlassiandb100 kernel: ---[ end trace 3b257d5567b7ecd7 ]---
Mar  9 13:36:35 atlassiandb100 kernel: ------------[ cut here ]------------

I am still receiving this, several times per second. It does not appear to be a hardware issue (nothing else indicates hardware issues), and as soon as Postgres is stopped, the errors stop.

The message is almost exactly the one in the blog post I linked above, which is how I arrived at the IPv6 theory in the first place. I'm just confused and frustrated because even though postgresql.conf tells Postgres to listen on the IPv4 address only, lsof -i6 still shows Postgres processes bound to IPv6.

telnet connections over IPv6 are refused:

telnet: connect to address ::1: Connection refused

I also took a look via netstat -a and I only see the IPv4 listening via TCP and a unix STREAM.

The startup command line mentions a port, but nothing else in it indicates IPv6:

postgres  37921  0.0  0.5 64539596 1441588 ?    S    Mar06   2:32
    /usr/pgsql-9.3/bin/postmaster -p 5432 -D /var/lib/pgsql/9.3/data

We are patched for Spectre and Meltdown, running

kernel-uek-3.8.13-118.20.2.el6uek.x86_64

I'd rather not disable IPv6 completely in the OS, as we maintain thousands of servers and want to avoid "snowflake" servers as much as we can. If we need to, we will go that route.

Best Answer

Thanks for the suggestions, everyone. It hadn't clicked in my brain that the trace messages were caused by UDP communications. That led me to this blog post. It's not exactly my issue, but it ultimately led me to the solution:

PostgreSQL uses the POSIX function getaddrinfo(3) to resolve localhost.

Testing getaddrinfo confirmed that it was returning ::1 before 127.0.0.1. I ended up commenting out the IPv6 line in /etc/hosts and restarting Postgres. Voilà! No more crazy trace messages in syslog.
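For anyone hitting the same thing: the sockets lsof reported were UDP loopback sockets (in 9.3 the stats collector uses one), and their address family follows whichever result getaddrinfo(3) returns first for localhost, not listen_addresses. A quick way to check the resolution order is Python's socket.getaddrinfo, which wraps the same libc call; the ordering you see depends on your resolver and /etc/hosts, so the output below will differ per system:

```python
import socket

# getaddrinfo returns results in resolver order; with "::1 localhost"
# listed first in /etc/hosts, the first entry can be AF_INET6.
results = socket.getaddrinfo("localhost", 0, 0, socket.SOCK_DGRAM)
for family, socktype, proto, _canon, sockaddr in results:
    print(family, sockaddr)

# A process that takes the first result (as the 9.3 stats collector
# effectively does for its loopback UDP socket) inherits that family.
family, socktype, proto, _canon, _sockaddr = results[0]
sock = socket.socket(family, socktype, proto)
print("loopback UDP socket family:", sock.family)
sock.close()
```

If the first printed family is AF_INET6, that explains IPv6 UDP sockets showing up in lsof despite an IPv4-only listen_addresses; reordering or commenting out the ::1 line in /etc/hosts changes what getaddrinfo hands back.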