As Google Cloud SQL is a managed instance of MySQL under the hood, limited on 2nd-generation machines to an absolute maximum of 100 GB of RAM, should I be compressing some of my table and index data? I am designing a global system, so the working set will probably eventually exceed 100 GB.
Previously, I've achieved performance improvements by compressing tables and indexes in memory-bound SQL Server and Oracle systems.
I'm new to GCP and, consequently, to MySQL. Would compression still apply in this case? Are there any functions or procedures (like compression_estimate in SQL Server) I can run to assess whether a table or index will benefit from compression?
Best Answer
Do you really need compression? A multi-terabyte dataset may happily live on a 100 GB-RAM machine. What is your data like? Text? Blobs? Lots of rows? Care to share the `CREATE TABLE`s?
I don't have any details on Cloud SQL. But...
Compression is usually not the first place to look when thinking of Scaling. It has overhead (usually CPU, sometimes complexity).
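If you do want to weigh that trade-off in MySQL (and therefore in Cloud SQL for MySQL), InnoDB supports per-table compression and exposes counters you can use to judge whether it is paying off. A minimal sketch, assuming an InnoDB table; the table name `orders` is illustrative:

```sql
-- Enable InnoDB table compression (requires innodb_file_per_table=ON,
-- which is the default in MySQL 5.7+).
ALTER TABLE orders ROW_FORMAT=COMPRESSED KEY_BLOCK_SIZE=8;

-- Judge effectiveness afterwards: if compress_ops_ok is much lower
-- than compress_ops, pages keep failing to compress and get re-split,
-- which burns CPU for little space saving.
SELECT page_size, compress_ops, compress_ops_ok, compress_time
FROM information_schema.INNODB_CMP;
```

As for the estimator question: MySQL has no built-in analogue of SQL Server's estimator. The usual workaround is to load a copy of the table with `ROW_FORMAT=COMPRESSED` and compare `data_length` and `index_length` in `information_schema.TABLES` before and after.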
"Scaling" usually involves Sharding or some other "distributed" technique.
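Sharding proper means splitting rows across multiple servers, which stock MySQL does not do natively. Within a single server, HASH partitioning is the closest built-in analogue of the same split-by-key idea; a hedged sketch, with illustrative table and column names:

```sql
-- Not sharding itself, but the same divide-by-key idea on one server:
-- rows are spread across 8 partitions by a hash of the primary key.
CREATE TABLE events (
  id BIGINT NOT NULL,
  payload VARCHAR(255),
  PRIMARY KEY (id)
) ENGINE=InnoDB
PARTITION BY HASH(id) PARTITIONS 8;
```

True sharding applies the same key-hashing in the application layer to decide which *server* a row lives on, rather than which partition.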
RAM is not usually the limiting factor in Scaling; disk space is. Or the bottleneck could be any of several other metrics - IOPs, network bandwidth, etc.
Scaling...