I have an application server (Play Framework), webserver (Caddy), and database server (MariaDB with InnoDB) that are running on a single machine for now.
The database is used in two ways:

- client Python code that makes highly specific SELECTs (over an SSH tunnel).
- the web/application server, which performs occasional INSERTs and UPDATEs, but mostly SELECTs from large tables each time a page loads.
Is there a simple way to enable (and force) query caching for the webserver but not for the client code? My thought was to set query_cache_type=DEMAND and use SQL_CACHE on the webserver, but it's not ideal. Perhaps I can force JDBC to use SQL_CACHE somehow?
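For reference, the DEMAND setup described above would look roughly like this (sizes and names are illustrative, not tuned recommendations; the cache must also be enabled at server startup via query_cache_size in my.cnf):

```sql
-- Enable DEMAND mode: only statements carrying SQL_CACHE are cached.
-- (Equivalently, set query_cache_type = 2 in my.cnf.)
SET GLOBAL query_cache_type = DEMAND;

-- Cached, because SQL_CACHE is present:
SELECT SQL_CACHE col1, col2 FROM big_table WHERE id = 42;

-- Not cached, because the hint is absent:
SELECT col1, col2 FROM big_table WHERE id = 42;
```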
Clarification:
The unfortunate technical issue with adding SQL_CACHE to every query is that the web application issues its queries through Slick and a JDBC backend. The integration with Slick is tight, so I can't rewrite the application layer, and I don't know how to force JDBC to append SQL_CACHE.
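One possibility, offered as an unverified sketch rather than a confirmed solution: the MariaDB JDBC driver accepts a sessionVariables option in the connection URL, which could switch query_cache_type for the web application's connections only, while the Python client keeps the global default. This assumes the query cache is enabled at server startup, and whether it behaves well with your connection pool and Slick version would need testing:

```
# Hypothetical Play/Slick datasource URL (host/db names are made up).
# Each connection from this pool runs with query_cache_type=ON, so its
# SELECTs use the QC without SQL_CACHE; other clients keep the global setting.
jdbc:mariadb://localhost:3306/mydb?sessionVariables=query_cache_type=ON
```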
Best Answer
If there is a lot of write activity, the Query Cache is virtually useless -- because all QC entries for a table are purged when any modification happens to the table.
If the QC is useful, using query_cache_type=DEMAND and manually saying SQL_CACHE or SQL_NO_CACHE on every SELECT may be optimal. This keeps the rare, bulky queries from cluttering the cache, while allowing the 'good' ones to make use of the QC. Note: this has nothing to do with the connector (JDBC, etc.), only with the SELECT statements.

Yes, DEMAND without SQL_CACHE leads to not using the QC. I agree that this is "not ideal" since it depends on un-obvious rules about the QC. Still, it may be your best solution, other than...

If a query is so slow that it matters, we should look critically at the query itself. The speed-up may be as simple as adding a 'composite' index, or some seemingly minor tweak to the query, such as not hiding an indexed column in a function call (eg, DATE(dt) = ...).
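To illustrate that last point with a sketch (table, column, and date values are made up): wrapping an indexed column in a function prevents the optimizer from using the index, while an equivalent range predicate keeps the index usable.

```sql
-- Assumes an index on orders(dt); all names here are illustrative.

-- Slow: DATE(dt) hides the indexed column inside a function,
-- so the optimizer cannot use the index and scans the table.
SELECT COUNT(*) FROM orders
WHERE DATE(dt) = '2024-06-01';

-- Faster: an equivalent half-open range on the bare column
-- lets the optimizer use the index on dt.
SELECT COUNT(*) FROM orders
WHERE dt >= '2024-06-01'
  AND dt <  '2024-06-02';
```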