There are some options you will need to consider:
InnoDB Buffer Pool
The reason 26G was picked is that you have 32GB of RAM, and 80% of that is 25.6G. Since you mentioned that you will have 100 databases and 100 applications, making this a multi-tenant DB server, you are going to have to get the InnoDB Buffer Pool size just right.
Please run this query:
SELECT IFNULL(B.engine,'Total') "Storage Engine",
CONCAT(LPAD(REPLACE(FORMAT(B.DSize/POWER(1024,pw),3),',',''),17,' '),' ',
SUBSTR(' KMGTP',pw+1,1),'B') "Data Size", CONCAT(LPAD(REPLACE(
FORMAT(B.ISize/POWER(1024,pw),3),',',''),17,' '),' ',
SUBSTR(' KMGTP',pw+1,1),'B') "Index Size", CONCAT(LPAD(REPLACE(
FORMAT(B.TSize/POWER(1024,pw),3),',',''),17,' '),' ',
SUBSTR(' KMGTP',pw+1,1),'B') "Table Size" FROM
(SELECT engine,SUM(data_length) DSize,SUM(index_length) ISize,
SUM(data_length+index_length) TSize FROM
information_schema.tables WHERE table_schema NOT IN
('mysql','information_schema','performance_schema') AND
engine IS NOT NULL GROUP BY engine WITH ROLLUP) B,
(SELECT 3 pw) A ORDER BY TSize;
This will tell you how much space is currently occupied by your MySQL instance. Whatever the total of the InnoDB Data Size and Index Size is, that is what you use. If that total is over the 80% limit, then you must use the 80% (leaving innodb_buffer_pool_size at 26G).
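If you just want that InnoDB total as a single number to compare against the 80% ceiling, here is a shorter sketch of the same idea:
-- Total InnoDB data + index size, in GB
SELECT CONCAT(ROUND(SUM(data_length+index_length)/POWER(1024,3),2),' GB') InnoDBTotal
FROM information_schema.tables WHERE engine = 'InnoDB';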
Since you have a quad-core server, set innodb_buffer_pool_instances to 4.
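In my.cnf (under the [mysqld] section) those two settings would look like this:
innodb_buffer_pool_size = 26G
innodb_buffer_pool_instances = 4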
InnoDB Transaction Log Files
Since 26G was selected as innodb_buffer_pool_size, you are going to need the biggest possible transaction logs. In MySQL 5.5 the combined size of the InnoDB log files must stay under 4GB, so with two log files, 2047M each is about as big as they can get. The value 512M was probably picked for innodb_log_file_size because there is nothing to suggest the amount of transaction data (in bytes) that will actually be processed.
To resize your transaction logs:
mysql -u... -p... -e"SET GLOBAL innodb_fast_shutdown = 0;"
service mysql stop
Next, edit my.cnf, replacing
innodb_log_file_size = 512M
with this
innodb_log_file_size = 2047M
Then, replace the transaction logs like this:
mv /var/lib/mysql/ib_logfile0 /var/lib/mysql/ib_logfile0.bak
mv /var/lib/mysql/ib_logfile1 /var/lib/mysql/ib_logfile1.bak
service mysql start
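After the restart it is worth a quick sanity check (the error log path below is an assumption; check log_error in your my.cnf):
ls -lh /var/lib/mysql/ib_logfile*   # the new files should be 2047M each
tail -n 50 /var/log/mysql/error.log # InnoDB should report recreating the log files without errors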
After a few months of peak activity, you could then run this query during a peak:
SET @TimeInterval = 300;  -- sample window in seconds (5 minutes)
-- Snapshot the total bytes written to the redo log at the start of the window
SELECT variable_value INTO @num1 FROM information_schema.global_status
WHERE variable_name = 'Innodb_os_log_written';
SELECT SLEEP(@TimeInterval);
-- Snapshot again at the end of the window
SELECT variable_value INTO @num2 FROM information_schema.global_status
WHERE variable_name = 'Innodb_os_log_written';
SET @ByteWrittenToLog = @num2 - @num1;
-- Extrapolate to bytes written per hour, expressed in KB, MB and GB
SET @KB_WL = @ByteWrittenToLog / POWER(1024,1) * 3600 / @TimeInterval;
SET @MB_WL = @ByteWrittenToLog / POWER(1024,2) * 3600 / @TimeInterval;
SET @GB_WL = @ByteWrittenToLog / POWER(1024,3) * 3600 / @TimeInterval;
SELECT @KB_WL,@MB_WL,@GB_WL;
Based on what comes back, resize the transaction logs again, so that the combined log files can hold roughly an hour's worth of redo writes at peak.
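As a hypothetical worked example: if @GB_WL comes back at about 1 GB written per hour at peak, then two log files of 512M each already hold roughly an hour of redo writes, and 2047M per file would be more than you need:
innodb_log_file_size = 512M  # 2 x 512M ~ 1G, about one hour of writes at the (hypothetical) 1 GB/hour rate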
Please see my earlier posts on doing this.
Multicore Engagement
When the InnoDB Plugin was introduced in MySQL 5.1.38, it set the world of MySQL on fire. Why? Because InnoDB was single threaded. You had to install the plugin to get the new settings that allowed InnoDB to use multiple cores.
Rather than writing something lengthy, please read my earlier posts on tweaking MySQL 5.5 to have InnoDB utilize multiple cores.
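For reference, these are the kinds of settings involved; the values below are just a starting-point sketch for a quad-core box, not tuned recommendations:
innodb_thread_concurrency = 0   # 0 = no ticket limit, let InnoDB use all cores
innodb_read_io_threads = 4
innodb_write_io_threads = 4
innodb_io_capacity = 200        # raise this for faster storage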
You can simplify your query to something along these lines. I expect MySQL will generate a simpler execution plan.
SELECT DATE(date) period, COUNT(*) clicks
FROM visits
WHERE url_id = 3
  AND date > DATE_SUB(CURRENT_TIMESTAMP(), INTERVAL 1 MONTH)
GROUP BY period;
If returning results on high-traffic links in one second is a hard requirement, you might need to upgrade your hardware.
Expect to need an index on each column used in a WHERE clause. You might benefit from some multicolumn indexes; {url_id, date} is a candidate.
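As a sketch (the index name is made up; the table and columns come from the query above):
ALTER TABLE visits ADD INDEX idx_url_date (url_id, date);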
Test at scale, if that's at all possible. (It's usually possible, although it might take some time.) Use EXPLAIN to see what MySQL is doing with your queries.
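For example, prefix the query with EXPLAIN to confirm the optimizer actually picks the (url_id, date) index:
EXPLAIN
SELECT DATE(date) period, COUNT(*) clicks
FROM visits
WHERE url_id = 3
  AND date > DATE_SUB(CURRENT_TIMESTAMP(), INTERVAL 1 MONTH)
GROUP BY period;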
You don't have to query browsers, countries, and everything else all at one time. When I was running web development, I rarely looked at countries--they weren't relevant to the niche I was working in. Also consider other asynchronous UI technologies.
PostgreSQL has a better optimizer. It might work better than MySQL. Test it.
Best Answer
I have administered both MySQL and DB2 database systems. In general, I'd say that DB2 requires more memory and more administration. But sometimes, you can use DB2's sophisticated features to perform queries which are much more efficient.
About memory and administration: DB2 is not very good at making good use of RAM. Either you spend a lot of time fine-tuning its memory config, or you use its automatic memory configuration ("STMM") and accept that it will waste a depressingly high percentage of your RAM. And DB2 has a bunch of other small annoyances which add up to a significant need for administration. Example: if you create a database, it will not be automatically "activated" at boot time, meaning that the first connection to it will take forever (a script can help you, but why-oh-why is that needed?).
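As an illustration, the activation workaround is usually just a one-liner run from a startup script after the instance comes up (the database name here is a placeholder):
db2 activate database MYDB   # pay the activation cost once, not on the first application connection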
About sophisticated features: If you take the (potentially long) time to study some DB2 features such as index-only scans, OLAP queries, semantic query optimizations (where the optimizer can sometimes take advantage of constraints defined in the tables), materialized views, etc., then there are situations where you can make things go orders of magnitude faster compared to a simpler database system. And if/when you make something run much faster through design and queries, then you can cut down on CPU, RAM and I/O requirements.
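To give a flavour of one of those features, a materialized query table (DB2's materialized view) looks roughly like this; the table and column names are made up:
-- Precompute an aggregate; the optimizer can rewrite matching queries to use it
-- (for REFRESH DEFERRED tables it will only do so when CURRENT REFRESH AGE is ANY)
CREATE TABLE sales_by_region AS
  (SELECT region, SUM(amount) AS total_amount FROM sales GROUP BY region)
  DATA INITIALLY DEFERRED REFRESH DEFERRED;
REFRESH TABLE sales_by_region;   -- populate/refresh the precomputed data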
If you have large amounts of data, then you also need to think about backups: do you need to back up your data, and if so, what options do the database systems provide? There is not much fun in an otherwise fast system which is often held back by long-running, I/O-saturating backups. DB2's backup and restore have many annoying characteristics, but they also have interesting features such as the ability to split data into hot and cold data (via separation into tablespaces), incremental or delta backups, etc. Last time I looked at MySQL, the backup options were less sophisticated.
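To give a flavour of the DB2 side (the database name, tablespace name and paths are placeholders; online and incremental backups also require archive logging and TRACKMOD to be configured):
db2 "BACKUP DATABASE MYDB ONLINE TO /backups/mydb"                      # full online backup
db2 "BACKUP DATABASE MYDB ONLINE INCREMENTAL TO /backups/mydb"          # changes since the last full backup
db2 "BACKUP DATABASE MYDB TABLESPACE (TS_HOT) ONLINE TO /backups/mydb"  # hot tablespaces only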