SUGGESTION #1 : Use Distribution Masters
A Distribution Master is a MySQL slave with log-bin enabled and log-slave-updates enabled, containing only tables that use the BLACKHOLE storage engine. You can apply replicate-do-db options to the Distribution Master so that the binary logs it creates contain only the DB schema(s) you want binlogged. In this way, you reduce the size of the outgoing binlogs from the Distribution Master.
You can set up a Distribution Master as follows:
- mysqldump your database(s) using --no-data option to generate a schema-only dump.
- Load the schema-only dump to the Distribution Master.
- Convert every table in the Distribution Master to the BLACKHOLE storage engine.
- Set up replication to the Distribution Master from a master with real data.
- Add replicate-do-db option(s) to /etc/my.cnf of the Distribution Master.
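The steps above boil down to a small my.cnf on the Distribution Master. This is a sketch only; the server-id values and the db1/db2 schema names are placeholders you would replace with your own:

```ini
# /etc/my.cnf on the Distribution Master (illustrative values)
[mysqld]
server-id        = 20        # must differ from the real master's server-id
log-bin          = mysql-bin # write its own binary logs
log-slave-updates            # re-log replicated events into those binlogs
replicate-do-db  = db1       # only these schemas end up in the outgoing binlogs
replicate-do-db  = db2
```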
For steps 2 and 3, you could instead edit the schema-only dump, replacing ENGINE=MyISAM and ENGINE=InnoDB with ENGINE=BLACKHOLE, and then load that edited dump into the Distribution Master.
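That edit is easy to script with sed. The file names here are hypothetical; the first line just fabricates a tiny schema-only dump so the example is self-contained:

```shell
# Stand-in for a real `mysqldump --no-data` output file:
printf 'CREATE TABLE t1 (id INT) ENGINE=InnoDB;\nCREATE TABLE t2 (id INT) ENGINE=MyISAM;\n' > schema_only.sql

# Swap every MyISAM/InnoDB engine clause for BLACKHOLE:
sed -e 's/ENGINE=MyISAM/ENGINE=BLACKHOLE/g' \
    -e 's/ENGINE=InnoDB/ENGINE=BLACKHOLE/g' \
    schema_only.sql > blackhole_schema.sql
```

You would then load blackhole_schema.sql into the Distribution Master with the mysql client.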
In step 3 only, if you want to script the conversion of all MyISAM and InnoDB tables to BLACKHOLE in the Distribution Master, run the following query and output it to a text file:
mysql -h... -u... -p... -A --skip-column-names -e"SELECT CONCAT('ALTER TABLE ',table_schema,'.',table_name,' ENGINE=BLACKHOLE;') BlackholeConversion FROM information_schema.tables WHERE table_schema NOT IN ('information_schema','mysql') AND engine <> 'BLACKHOLE'" > BlackholeMaker.sql
An added bonus to scripting the conversion of tables to the BLACKHOLE storage engine is that MEMORY storage engine tables are converted as well. While MEMORY tables do not take up disk space for data storage, they do take up memory. Converting MEMORY tables to BLACKHOLE keeps memory in the Distribution Master uncluttered.
As long as you do not send any DDL to the Distribution Master, you can transmit any DML (INSERT, UPDATE, DELETE) you desire before letting clients replicate just the DB info they want.
I already wrote a post in another StackExchange site that discusses using a Distribution Master.
SUGGESTION #2 : Use Smaller Binary Logs and Relay Logs
If you set max_binlog_size to something ridiculously small, then binlogs can be collected and shipped out in smaller chunks. There is also a separate option to set the size of relay logs, max_relay_log_size. If max_relay_log_size = 0, it will default to whatever max_binlog_size is set to.
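These two options can be set together in my.cnf; the 16M figure is purely illustrative, not a recommendation:

```ini
[mysqld]
max_binlog_size    = 16M  # rotate binary logs in smaller chunks
max_relay_log_size = 0    # 0 means: follow max_binlog_size
```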
SUGGESTION #3 : Use Semisynchronous Replication (MySQL 5.5 only)
Set up your main database and multiple Distribution Masters as MySQL 5.5. Enable Semisynchronous Replication so that the main database can quickly ship binlogs to the Distribution Masters. If ALL your slaves are Distribution Masters, you may not need Semisynchronous Replication or MySQL 5.5. If any of the slaves, other than Distribution Masters, hold real data for reporting, high availability, passive standby, or backup purposes, then go with MySQL 5.5 in conjunction with Semisynchronous Replication.
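Enabling semisynchronous replication in MySQL 5.5 is a matter of loading the bundled plugins and switching them on; a sketch (the .so names are for Linux builds):

```sql
-- On the master (the main database):
INSTALL PLUGIN rpl_semi_sync_master SONAME 'semisync_master.so';
SET GLOBAL rpl_semi_sync_master_enabled = 1;

-- On each semisync slave (e.g. a Distribution Master):
INSTALL PLUGIN rpl_semi_sync_slave SONAME 'semisync_slave.so';
SET GLOBAL rpl_semi_sync_slave_enabled = 1;
```

Add the corresponding `rpl_semi_sync_*_enabled` settings to my.cnf as well so they survive a restart.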
SUGGESTION #4 : Use Statement-Based Binary Logging NOT Row-Based
If an SQL statement updates multiple rows in a table, Statement-Based Binary Logging (SBBL) stores only the SQL statement. The same statement under Row-Based Binary Logging (RBBL) will actually record the change for each affected row. This makes it obvious that transmitting SQL statements with SBBL saves binary log space compared to RBBL.
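The logging format is controlled by a single option; in my.cnf this looks like:

```ini
[mysqld]
binlog_format = STATEMENT
```

It can also be changed at runtime with `SET GLOBAL binlog_format = 'STATEMENT';`, which affects new sessions only.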
Another problem is using RBBL in conjunction with replicate-do-db when table names have the database name prepended. This cannot be good for a slave, especially for a Distribution Master. Therefore, make sure no DML has a database name and a period in front of any table names.
Generally, I don't add redundant columns unless I really need to.
Running a COUNT over a set of data is quite efficient in any RDBMS.
Consider: a read over indexed (and hopefully cached) data to get the count will beat the second write needed to maintain the denormalised column. That write requires more resources, locking, a longer transaction, etc., which impacts reads even more.
If performance becomes an issue over time, then you can pre-calculate the COUNT more efficiently using an indexed (aka materialised) view.
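As a sketch in SQL Server syntax (where indexed views require SCHEMABINDING and COUNT_BIG; the table and column names are assumptions for illustration):

```sql
CREATE VIEW dbo.OrderCounts
WITH SCHEMABINDING
AS
SELECT CustomerId,
       COUNT_BIG(*) AS OrderCount  -- COUNT_BIG is required in indexed views
FROM dbo.Orders
GROUP BY CustomerId;
GO

-- The unique clustered index is what materialises the view on disk:
CREATE UNIQUE CLUSTERED INDEX IX_OrderCounts
    ON dbo.OrderCounts (CustomerId);
```

After that, the engine maintains the aggregated count automatically as rows change, so reads of the count stay cheap.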
Best Answer
The second option should be the fastest; it was made for this. It should also be free of bugs, since it has been used widely and for a long time. If you go with it, you do not even need a composite primary key.
In my opinion the only reason to use the first option is if you need a numbering starting from 1 per client.