I'm running a Facebook app which currently has 300–600 concurrent users (and growing). To get the hardware ready for growth I upgraded from an i7 / 12GB RAM / 2x 80GB Intel X25 SSDs (Debian 5.0 / MySQL 5.0 / 64-bit) to a dual Xeon / 24GB RAM / 2x 120GB Intel 320 SSDs (Ubuntu 10.10 / MySQL 5.1 / 64-bit).
Now I'm facing the problem that performance is worse than on the "smaller" box. On both servers I've been using nginx / PHP FastCGI to serve the content.
I'm using InnoDB only, with reads/writes at about 65% / 35% and around 800–1000 qps, but all queries are simple and never join more than one additional table. All indexes are in place and no individual query shows up in the slow log (> 2s).
At the moment I have around 400MB of data (around 1GB with indexes) and expect it to double every month.
I'd be grateful to anyone who could give me a hint on what to change to make it run smoother.
The old configuration on the i7 box was as follows (mixed MyISAM / InnoDB); it performed pretty well with up to 800+ users.
Old my.cnf:
key_buffer = 3000M
max_allowed_packet = 128M
thread_stack = 192K
thread_cache_size = 8
max_connections = 400
table_cache = 8000
thread_concurrency = 16
query_cache_limit = 8M
query_cache_size = 128M
wait_timeout = 10
interactive_timeout = 10
connect_timeout = 600
low_priority_updates = 1
join_buffer_size = 8M
read_buffer_size = 2M
sort_buffer_size = 3M
myisam_sort_buffer_size = 32M
read_rnd_buffer_size = 4M
innodb_buffer_pool_size = 3G
innodb_log_buffer_size = 8M
The new configuration on the dual-Xeon box is as follows (pure InnoDB). It causes high load with 300+ users, with around 30 MySQL processes sitting at the top of the process list.
Disk I/O (iostat):
avg-cpu: %user %nice %system %iowait %steal %idle
36.28 0.00 1.60 0.17 0.00 61.95
New my.cnf:
key_buffer = 64M
max_allowed_packet = 1M
thread_stack = 192K
thread_cache_size = 128
max_connections = 500
table_cache = 512
#thread_concurrency = 10
sort_buffer_size = 256K
read_buffer_size = 256K
read_rnd_buffer_size = 256K
tmp_table_size = 32M
max_heap_table_size = 32M
query_cache_limit = 1M
query_cache_size = 128M
query_cache_type = 1
innodb_file_per_table = 1
innodb_data_file_path = ibdata1:1000M:autoextend
innodb_buffer_pool_size = 16384M
innodb_additional_mem_pool_size = 8M
innodb_flush_log_at_trx_commit = 1
innodb_support_xa = 0
innodb_lock_wait_timeout = 50
innodb_flush_method=O_DIRECT
innodb_log_files_in_group = 2
innodb_log_file_size = 128M
innodb_log_buffer_size = 8M
innodb_thread_concurrency = 12
Best Answer
I have written some posts on StackExchange (linked at the end of this answer). Please read these for the guidance you need.
Now, for the more pressing issues: you mentioned that you have 400MB of data, 1GB with indexes. It scares me a little that your indexes are 50% bigger than the data. However, since all your data is InnoDB and you are satisfied with the current query performance, your settings are more than adequate, especially the 16384MB innodb_buffer_pool_size. That's 16GB. You are all set there.
But wait!!! Your innodb_log_file_size is 128M? That's way too small given the 16GB buffer pool. You should resize the ib_logfile files (set innodb_log_file_size to 2047M, since MySQL 5.1 caps the combined redo log at 4GB across your two log files).
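A minimal sketch of that change (assuming the default ib_logfile0 / ib_logfile1 names under /var/lib/mysql; MySQL 5.1 cannot resize redo logs in place, so a clean shutdown has to come first):

```
[mysqld]
innodb_log_files_in_group = 2
innodb_log_file_size      = 2047M   # was 128M; 5.1 caps combined redo log at 4GB

# Procedure:
#   1. mysqladmin shutdown              (wait for a clean shutdown to finish)
#   2. mv /var/lib/mysql/ib_logfile0 /var/lib/mysql/ib_logfile1 /tmp/
#   3. edit my.cnf as above, then start mysqld
#   4. InnoDB recreates the log files at the new size on startup
```

Keep the old log files around in /tmp until the server comes back up cleanly.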
You might also be experiencing load on a per-thread basis. Keep your connection buffers (join_buffer_size, sort_buffer_size, read_buffer_size, read_rnd_buffer_size) small, because each one is allocated per connection, and with max_connections = 500 that adds up quickly.
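As a rough worst-case sketch of why those per-connection buffers matter (figures taken from the new my.cnf above; real usage is lower because these buffers are only allocated on demand, and join_buffer_size is left at its 5.1 default here):

```shell
# Back-of-the-envelope memory math for the new box, in MB.
innodb_buffer_pool=16384                # innodb_buffer_pool_size
query_cache=128                         # query_cache_size
key_buffer=64                           # key_buffer
per_conn_kb=$((256 + 256 + 256 + 192))  # sort + read + read_rnd buffers + thread_stack, in KB
max_connections=500
per_conn_total=$(( per_conn_kb * max_connections / 1024 ))  # if every connection allocates all of them
total=$(( innodb_buffer_pool + query_cache + key_buffer + per_conn_total ))
echo "worst-case global + per-connection memory: ${total} MB"
```

The point is that every one of the 500 connections can claim its own copy of each buffer, so raising these "to speed things up" multiplies by 500, while keeping them small costs you almost nothing on simple single-join queries.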
From Me : Why does MySQL say I'm out of memory?
From @DTest : How do you calculate mysql max_connections variable?
Give it a Try !!!