MySQL – Worth Splitting MySQL Server Into 2 MySQL Servers To Increase Performance?


Context
I am using an AWS RDS server to host my MySQL DB.

I work for a footfall company. We have our system installed all over the UK, and every time a visitor is seen we write to the DB. We also provide a portal that lets our customers view all the data as readable reports. Customers only use the portal once a week on average, so there are nowhere near as many reads on the DB as writes.

RDS Stats
[Graph: read and write IOPS for the RDS instance]

The graph above shows the read and write IOPS; as you can see, the write IOPS are far higher than the read IOPS.

Question
On the portal we don't show any of today's stats, and at midnight every night we run a load of cron jobs which summarise all the data.

I am thinking of splitting my RDS server into two. One would contain all the summarised data, which the portal would read from to show the stats. The other would contain all of today's data and would constantly be written to. At midnight the cron jobs would then summarise the data from the DB containing today's data and store the summarised results in the portal DB.
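For illustration, a minimal sketch of what one of those midnight jobs might look like. The table names (`visits`, `daily_summary`), column names, hostnames, and the PyMySQL driver are all assumptions for the sketch, not our actual schema:

```python
import os
import pymysql

# Connect to both instances; hostnames and credentials are placeholders.
writes_db = pymysql.connect(host="todays-data.example.rds.amazonaws.com",
                            user="cron", password=os.environ["DB_PASS"],
                            database="footfall")
portal_db = pymysql.connect(host="portal.example.rds.amazonaws.com",
                            user="cron", password=os.environ["DB_PASS"],
                            database="portal")

with writes_db.cursor() as src:
    # Roll yesterday's raw sightings up to one row per site per day.
    src.execute("""
        SELECT site_id, DATE(seen_at) AS day, COUNT(*) AS visitors
        FROM visits
        WHERE seen_at >= CURDATE() - INTERVAL 1 DAY
          AND seen_at <  CURDATE()
        GROUP BY site_id, DATE(seen_at)
    """)
    rows = src.fetchall()

with portal_db.cursor() as dst:
    # Store the summarised rows where the portal reads them.
    dst.executemany(
        "INSERT INTO daily_summary (site_id, day, visitors) VALUES (%s, %s, %s)",
        rows,
    )
portal_db.commit()
```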

The idea behind the split is to make the portal as fast as possible by removing all the writes from the DB it reads.

I am no MySQL guru, so this may have no effect; the RDS server can already handle all the reads and writes to the DB with its current specs. If the current RDS server can handle the requests, am I right in presuming the split won't have much effect?

We are currently just trying to optimise our performance so any advice is really helpful.

Best Answer

Your RDS instance is coping with your write load, but if you want more scalability, separating the read and write loads is a good idea: they have different work patterns and probably have conflicting performance-tuning needs.

Once you have decided to split, you are free to choose a different product to handle the incoming raw data for the daily write load. Why not choose a storage medium that is more optimised for writes? Consider sending the daily writes to a DynamoDB table.
As your cron jobs are already written, you already know how you will query the raw data, and that can guide how you structure your DynamoDB table (see the sketch below). You can also auto-scale DynamoDB to cope with changing or bursty loads.
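As a rough sketch of that idea, each sighting could be written to DynamoDB like this. The table name (`raw_visits`) and the key layout (`site_id` partition key, `seen_at` sort key) are my assumptions for the example, not a prescription:

```python
import time
import boto3

dynamodb = boto3.resource("dynamodb")
table = dynamodb.Table("raw_visits")  # assumed table name

def record_visit(site_id: str) -> None:
    now = int(time.time())
    table.put_item(Item={
        "site_id": site_id,   # partition key: one partition per site
        "seen_at": now,       # sort key: epoch seconds of the sighting
        # TTL attribute (see below): expire raw rows 30 days after capture.
        "expires_at": now + 30 * 24 * 3600,
    })
```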

You can also set a TTL on the raw data so that, as time passes and the summary data becomes the more relevant copy, the raw data is deleted automatically, or even send the older raw data to S3 for cheaper storage once it is no longer 'hot'.
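Enabling TTL is a one-off table setting. A sketch with boto3, again assuming the `expires_at` attribute from the example above:

```python
import boto3

client = boto3.client("dynamodb")

# Tell DynamoDB which item attribute holds the expiry timestamp
# (epoch seconds); items are deleted automatically after that time.
client.update_time_to_live(
    TableName="raw_visits",
    TimeToLiveSpecification={
        "Enabled": True,
        "AttributeName": "expires_at",
    },
)
```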