I don't have experience with DynamoDB so the best I can do is offer some pointers on optimizations to look for if you stay in the hosted MySQL world.
- Memory allocation. If you're using InnoDB, make sure the buffer pool is adequately sized, with two log files whose combined size is the lesser of 25% of the buffer pool or 1 GB. In a perfect world your buffer pool would be as large as all the .ibd files combined (i.e. SUM(data_length + index_length) FROM information_schema.tables WHERE engine = 'InnoDB'). Understandably this is not always possible, in which case try to allocate 60-70% of system memory to the buffer pool while leaving the rest available for the disk cache. (This of course assumes a dedicated MySQL server that isn't also hosting Apache or running your PHP scripts.)
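To compare what you currently have against that ideal, you can run the query mentioned above alongside the current setting. A quick sketch (standard MySQL system variables and information_schema tables):

```sql
-- Current buffer pool size, in MB
SELECT @@innodb_buffer_pool_size / 1024 / 1024 AS buffer_pool_mb;

-- Total InnoDB data + index size, in MB (the "perfect world" target)
SELECT SUM(data_length + index_length) / 1024 / 1024 AS innodb_data_mb
FROM information_schema.tables
WHERE engine = 'InnoDB';
```

If innodb_data_mb is much larger than buffer_pool_mb, you'll be hitting disk for reads that could have been served from memory.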
If you're running MyISAM tables, then at least make sure your key_buffer_size can accommodate the total index_length (i.e. the size of your .MYI files).
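The equivalent check for MyISAM, again using information_schema:

```sql
-- Current key buffer size, in MB
SELECT @@key_buffer_size / 1024 / 1024 AS key_buffer_mb;

-- Total MyISAM index size (.MYI files), in MB
SELECT SUM(index_length) / 1024 / 1024 AS myisam_index_mb
FROM information_schema.tables
WHERE engine = 'MyISAM';
```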
- Disks. When you do need to fall back to disk, you want it to be as fast as possible. In a perfect world you'd have PCI-E based SSD cards. I imagine this is out of your budget if you're worried about charges after switching to Amazon, so see whether your hosting provider offers anything faster than what you currently have. Roughly: SSD-based SAS drives > 15k RPM SAS > NL-SAS.
- Service layer between MySQL and clients. Each connection to the database carries memory overhead that could otherwise go toward the buffer pool or disk cache. Try to have a service layer that maintains a bounded connection pool and provides application-layer caching where possible.
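A quick way to gauge how much connection overhead you're carrying is to compare peak usage against the configured limit (both are standard MySQL status/variable names):

```sql
-- Peak number of simultaneous connections since server start
SHOW GLOBAL STATUS LIKE 'Max_used_connections';

-- Configured ceiling; each allowed connection reserves per-session
-- buffers (sort, join, read) that scale with this number
SHOW VARIABLES LIKE 'max_connections';
```

If Max_used_connections is far below max_connections, a bounded pool lets you lower the limit and reclaim that memory for the buffer pool.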
- Distribute the workload. This is tantamount to going to the "cloud", except that you're managing all the machines yourself. If your app is mostly read-heavy and doesn't always need to be up-to-the-microsecond current, you can set up replication across many physical machines and load-balance the reads across them.
Look into setting up slaves in different physical datacenters across the country/world/whatever is applicable to your demographic, and load-balance connections to the nearest one. (This assumes traffic is first routed to front ends in those locations; otherwise the problem might just be worsened.)
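Setting up such a read replica is mostly a matter of pointing it at the primary's binary log. A minimal sketch using the classic MySQL replication commands (host, user, password, and log coordinates below are placeholders for your own values):

```sql
-- On each replica: point it at the primary's binlog
CHANGE MASTER TO
  MASTER_HOST = 'primary.example.com',   -- placeholder hostname
  MASTER_USER = 'repl',                  -- placeholder replication user
  MASTER_PASSWORD = 'secret',            -- placeholder
  MASTER_LOG_FILE = 'mysql-bin.000001',  -- from SHOW MASTER STATUS on the primary
  MASTER_LOG_POS = 4;

START SLAVE;

-- Keep replicas read-only so only replication can write to them
SET GLOBAL read_only = ON;
```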
- Partitioning. This is especially valuable if you're using MyISAM and dealing with table-level locks. It's also particularly useful for tables where old rows are pruned periodically: you can simply drop partitions instead of running individual DELETE queries, which fragment the data.
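As an illustration of that pruning pattern, here's a hypothetical log table partitioned by month (table and partition names are made up; standard MySQL range-partitioning syntax):

```sql
-- Hypothetical log table, partitioned by month; note the partition
-- column must be part of the primary key
CREATE TABLE access_log (
  id        BIGINT NOT NULL AUTO_INCREMENT,
  logged_at DATETIME NOT NULL,
  msg       VARCHAR(255),
  PRIMARY KEY (id, logged_at)
) PARTITION BY RANGE (TO_DAYS(logged_at)) (
  PARTITION p2023_12 VALUES LESS THAN (TO_DAYS('2024-01-01')),
  PARTITION p2024_01 VALUES LESS THAN (TO_DAYS('2024-02-01')),
  PARTITION pmax     VALUES LESS THAN MAXVALUE
);

-- Prune a whole month of old rows in one cheap metadata operation,
-- instead of a fragmenting DELETE:
ALTER TABLE access_log DROP PARTITION p2023_12;
```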
Something like this:
SELECT COALESCE(CONCAT('Component ', t.component), 'Total for all Components') AS __group__,
       CASE
         WHEN t.id > 0 THEN
           CASE t.status WHEN 'closed' THEN 'color: #777; background: #ddd; border-color: #ccc;' END
         ELSE 'font-weight: bold'
       END AS __style__,
       t.id AS ticket,
       CASE WHEN t.id > 0 THEN t.summary ELSE 'Total' END AS summary,
       SUM(COALESCE(w.endtime, CURRENT_TIMESTAMP) - w.starttime) AS total
FROM ticket t
INNER JOIN work_log w ON t.id = w.ticket
GROUP BY t.component, t.id, t.summary, t.status
The SUM() will add up the difference in milliseconds between endtime and starttime; I'm not sure what the intention behind that is.
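If the intent is a human-readable total of time worked, a safer formulation (assuming MySQL and the same work_log schema as above) avoids raw timestamp subtraction entirely:

```sql
-- Sum durations in whole seconds, then format as HH:MM:SS;
-- open work entries (endtime IS NULL) count up to "now"
SELECT t.id AS ticket,
       SEC_TO_TIME(SUM(TIMESTAMPDIFF(SECOND, w.starttime,
                                     COALESCE(w.endtime, CURRENT_TIMESTAMP)))) AS total
FROM ticket t
INNER JOIN work_log w ON t.id = w.ticket
GROUP BY t.id;
```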
I'm not entirely sure what the rollup is meant to do, but something along these lines might get you started (note that the grand-total branch must not have a GROUP BY, or you'd just get one row per group again):
with base_data as (
... the above query goes here ...
)
select __group__, ticket, __style__, summary, total
from base_data
union all
select null, null, null, null, sum(total)
from base_data
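If this is MySQL, an alternative worth considering is GROUP BY ... WITH ROLLUP, which appends the total rows for you. A sketch against the same tables (the row where component IS NULL is the grand total):

```sql
SELECT t.component,
       SUM(COALESCE(w.endtime, CURRENT_TIMESTAMP) - w.starttime) AS total
FROM ticket t
INNER JOIN work_log w ON t.id = w.ticket
GROUP BY t.component WITH ROLLUP;
```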
Best Answer
The answer is no. The privilege to run ANALYZE TABLE is implicit when a user has the SELECT and INSERT privileges on the table; there is no way to use access privileges alone to revoke the ability to run ANALYZE TABLE.
Are you sure the client is explicitly running ANALYZE TABLE statements? It's possible they are simply running SHOW TABLE STATUS or querying INFORMATION_SCHEMA.TABLES or INFORMATION_SCHEMA.STATISTICS. If so, then this might help:
For InnoDB tables, the work done by ANALYZE TABLE is identical to what those metadata queries trigger: querying metadata causes the table statistics to be resampled, which can be a burden if it happens too frequently. It's useful to turn off the automatic update of statistics on metadata queries; we disable it on all our MySQL instances at my company.
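Concretely, the server option in question is innodb_stats_on_metadata:

```sql
-- Stop metadata queries (SHOW TABLE STATUS, INFORMATION_SCHEMA.TABLES, ...)
-- from triggering a resample of InnoDB table statistics
SET GLOBAL innodb_stats_on_metadata = OFF;
```

To make it survive a restart, also set innodb_stats_on_metadata = OFF in the [mysqld] section of my.cnf.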
I recall years ago when phpMyAdmin added queries to the INFORMATION_SCHEMA to discover metadata, it caused some large hosting providers to crash, because hundreds of users at a time were viewing their databases in phpMyAdmin.
I also ran into this when I developed the database component for Zend Framework 1.0. I found a workaround: use DESCRIBE instead of INFORMATION_SCHEMA queries. DESCRIBE doesn't trigger the update of table statistics.
If the client is really running the literal statement ANALYZE TABLE, and you can't change the client code (which would be the obvious solution), my only other suggestion is to run an SQL proxy between the client and the database and filter out ANALYZE TABLE statements.
For example, here's a blog describing use of ProxySQL to filter out offending queries:
https://www.percona.com/blog/2016/08/23/how-to-stop-an-offending-query-with-proxysql/
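ProxySQL's admin interface stores such filters in its mysql_query_rules table. A minimal sketch of a rule that rejects ANALYZE TABLE (the rule_id, pattern, and error message are illustrative; run this on the ProxySQL admin port, typically 6032):

```sql
-- Reject any statement starting with ANALYZE TABLE, returning an error
INSERT INTO mysql_query_rules (rule_id, active, match_pattern, error_msg, apply)
VALUES (1, 1, '^ANALYZE\s+TABLE', 'ANALYZE TABLE blocked by proxy', 1);

-- Activate the rule and persist it
LOAD MYSQL QUERY RULES TO RUNTIME;
SAVE MYSQL QUERY RULES TO DISK;
```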