We've run a PHP/MySQL blog site on a shared server for years with no problems (my own code, not WordPress). I'm not a systems or database guy except for what I need to be. Our host (BlueHost) moved our site to a new server, and our capacity to serve our audience has dropped significantly. We're experiencing a seasonal spike right now, which makes it worse; so much so that we're also using an identical copy of the site on another host (GoDaddy) to bear the load. Not ideal. I don't have admin privileges, obviously, but I can see that we get 1000 max connections on GoDaddy and 150 on BlueHost (not sure what it was before the move).
I've been tracking Threads_connected and Threads_running on both sites (again, identical sites). GoDaddy never gets above ~25 connected and ~2 running. BlueHost will stack up Threads_connected until it hits the 150-connection limit. I'm trying not to blow through our resource allotment, but even with a lot of (and sometimes all) traffic redirected over to GoDaddy, BlueHost will stay above 120 connections for an extended period. Not sure if it's a caching issue or what.
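If you can query information_schema on the BlueHost box, it's worth looking at what those stacked-up connections are actually doing. A minimal sketch, assuming an existing PDO handle in `$dbh` (the variable name is a placeholder):

```php
<?php
// Sketch: list every current connection and what it is doing.
// Assumes $dbh is an already-open PDO handle with rights to read
// information_schema (shared hosts usually show at least your own user).
$stmt = $dbh->query(
    "SELECT id, user, command, time, state
       FROM information_schema.processlist
      ORDER BY time DESC"
);
foreach ($stmt->fetchAll(PDO::FETCH_ASSOC) as $row) {
    // Rows stuck in 'Sleep' for a long time are idle handles that PHP
    // never closed; long-running 'Query' rows point at slow SQL instead.
    printf("%6d %-16s %-8s %6ds %s\n",
        $row['id'], $row['user'], $row['command'],
        $row['time'], $row['state'] ?? '');
}
```

If most of the 120+ connections are in `Sleep`, the problem is connections not being released, not query load.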
I've tried doing an 'inline' declaration of timeouts when I open my PHP/PDO connection to MySQL, but I'm skeptical that it's fixing the problem:
$dbh_ip_check->exec("SET session wait_timeout=10");
$dbh_ip_check->exec("SET session connect_timeout=10");
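For what it's worth, the second statement can't work: connect_timeout has global scope only in MySQL, so `SET SESSION connect_timeout` raises an error, and by the time you can run SQL the connection is already established anyway. The client-side connect timeout belongs in the PDO constructor instead. A sketch with placeholder DSN and credentials:

```php
<?php
// Sketch: host, database name, and credentials are placeholders.
$dbh_ip_check = new PDO(
    'mysql:host=localhost;dbname=blog;charset=utf8mb4',
    'user', 'password',
    [
        PDO::ATTR_TIMEOUT => 10,                  // client-side connect timeout, in seconds
        PDO::ATTR_ERRMODE => PDO::ERRMODE_EXCEPTION,
    ]
);

// wait_timeout IS session-settable: the server drops this handle
// after 10 seconds of idleness instead of the global 3600.
$dbh_ip_check->exec("SET SESSION wait_timeout = 10");

// ... run queries ...

// Explicitly release the connection as soon as the page is done with it,
// rather than waiting for script shutdown.
$dbh_ip_check = null;
```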
These are the current global variables:
mysql> show global variables like '%timeout';
+-----------------------------+----------+
| Variable_name               | Value    |
+-----------------------------+----------+
| connect_timeout             | 20       |
| delayed_insert_timeout      | 40       |
| have_statement_timeout      | YES      |
| innodb_flush_log_at_timeout | 1        |
| innodb_lock_wait_timeout    | 50       |
| innodb_rollback_on_timeout  | OFF      |
| interactive_timeout         | 10       |
| lock_wait_timeout           | 31536000 |
| net_read_timeout            | 30       |
| net_write_timeout           | 60       |
| rpl_stop_slave_timeout      | 31536000 |
| slave_net_timeout           | 60       |
| thread_pool_idle_timeout    | 60       |
| wait_timeout                | 3600     |
+-----------------------------+----------+
We don't get a huge amount of traffic, but during spikes we probably see >100 people hitting the site at a time, sustained for half an hour or more.
Any recommendations for fixes on a shared server? Thanks.
Current server variables/status:
Show global status copy/paste: https://paste.debian.net/1182642/
Show global variables: https://paste.debian.net/1182643/
ETA:
[~]# iostat -xm 5 3
Linux 4.19.150-76.ELK.el7.x86_64 (boxxxxx.bluehost.com)  01/27/21  _x86_64_  (12 CPU)

avg-cpu:  %user  %nice  %system  %iowait  %steal  %idle
          51.92   5.08    25.79     2.47    1.35  13.39

Device: rrqm/s wrqm/s r/s w/s rMB/s wMB/s avgrq-sz avgqu-sz await r_await w_await svctm %util
[~]# ulimit -a
core file size          (blocks, -c) 0
data seg size           (kbytes, -d) 800000
scheduling priority             (-e) 0
file size               (blocks, -f) unlimited
pending signals                 (-i) 178732
max locked memory       (kbytes, -l) 64
max memory size         (kbytes, -m) 800000
open files                      (-n) 100
pipe size            (512 bytes, -p) 8
POSIX message queues     (bytes, -q) 819200
real-time priority              (-r) 0
stack size              (kbytes, -s) 15240
cpu time               (seconds, -t) unlimited
max user processes              (-u) 25
virtual memory          (kbytes, -v) unlimited
file locks                      (-x) unlimited
Best Answer
max_connections is probably set to 151, which is the default in newer versions of MySQL, and the new server is probably running one. Assuming BlueHost allows you to change the config file (they may even have a UI for doing so), change that to, say, 1000.
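If you do get access to the server config, the relevant my.cnf fragment would look like this (1000 is illustrative; a shared host may well cap what it will actually honor):

```ini
[mysqld]
max_connections = 1000
```

The change takes effect after a server restart, or immediately via SET GLOBAL if the host grants you that privilege.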
Having hundreds of 'simultaneous' connections is not, in itself, a "bad" thing. However, it may be indicative of some other problem.
Do the connections hang around longer than necessary? Is there some form of connection pooling? What is the client, and does it have a limit on connections? Are you also having CPU or other performance problems? How long does one user stay online and connected?
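On the PHP side, the closest thing to connection pooling is PDO persistent connections, which keep one handle open per Apache/PHP-FPM worker process instead of opening a fresh connection on every page load. Whether that helps or hurts here depends on how many worker processes the host runs. A sketch with placeholder DSN and credentials:

```php
<?php
// Sketch: a persistent connection reuses one handle per PHP worker
// process rather than reconnecting on every request.
// Host, database name, and credentials are placeholders.
$dbh = new PDO(
    'mysql:host=localhost;dbname=blog;charset=utf8mb4',
    'user', 'password',
    [PDO::ATTR_PERSISTENT => true]
);
// Caveat: with persistence, Threads_connected settles near the number of
// PHP workers, and any "SET SESSION ..." change survives across requests
// on a reused handle, so each script must not assume a fresh session.
```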