I think I've got the answer: you need to put those options under the [mysqld] section:
[mysqld]
slow-query-log=1
slow-query-log-file=/var/logs/my.slow.log
long_query_time=1
and restart mysql
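After the restart, you can sanity-check that the settings took effect:

SHOW GLOBAL VARIABLES LIKE 'slow_query_log%';
SHOW GLOBAL VARIABLES LIKE 'long_query_time';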
UPDATE 2013-03-05 16:36 EST
I don't know why this is still happening, but please try this:
service mysql stop
rm -f /var/logs/my.slow.log
touch /var/logs/my.slow.log
chown mysql:mysql /var/logs/my.slow.log
service mysql start
then run SELECT SLEEP(10); and see if it lands in /var/logs/my.slow.log (with long_query_time=1, a 10-second sleep easily qualifies).
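If nothing shows up, double-check which file the server actually writes to:

SHOW GLOBAL VARIABLES LIKE 'slow_query_log_file';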
According to your current explanation, indexes are not going to help much (if at all) with your current query.
So there are 3.25M rows with a project key.

That's also the total number of rows, so this predicate is true for (almost) every row ... and not selective at all. But there is no useful index for the jsonb column "references". Including it in the btree index on ("createdAt", "references", type) is just pointless.
Even if you had a generally more useful GIN index on "references" like:
CREATE INDEX stats_references_gix ON stats USING gin ("references");
... Postgres would still have no useful statistics about individual keys inside the jsonb column.
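For reference, such a GIN index (with the default jsonb_ops operator class) supports the jsonb existence and containment operators, for example:

SELECT * FROM stats WHERE "references" ? 'project';
SELECT * FROM stats WHERE "references" @> '{"project": "p1"}';  -- value 'p1' is made up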
There are only 5 distinct values for the type field.

Your query selects all of one type and an unknown fraction of another type. That's an estimated 20 - 40 % of all rows. A sequential scan is most certainly going to be the fastest plan. Indexes start to make sense for around 5 % of all rows or less.
To test, you can force the use of a possible index by disabling sequential scans in your session (for debugging purposes only):
SET enable_seqscan = off;
Reset with:
RESET enable_seqscan;
You'll see slower queries ...
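To compare plans and timings either way, wrap the query in EXPLAIN (ANALYZE, BUFFERS); a minimal sketch with a simplified query:

SET enable_seqscan = off;
EXPLAIN (ANALYZE, BUFFERS)
SELECT count(*) FROM stats WHERE "references" ? 'project';
RESET enable_seqscan;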
You group by project values:
GROUP BY "references"->> 'project'
And:

There are exactly 400,000 distinct values for the project reference.

That's 8 rows per project on average. Depending on value frequencies, we would still have to retrieve an estimated 3 - 20 % of all rows even if we only picked min and max per project in a LATERAL subquery, as sketched below ...
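Such a LATERAL variant could look like this (just a sketch; deriving the distinct project keys on the fly is itself expensive, which is part of the problem):

SELECT p.project
     , EXTRACT(EPOCH FROM (x.max_created - x.min_created)) AS duration
FROM  (SELECT DISTINCT "references"->>'project' AS project FROM stats) p
CROSS  JOIN LATERAL (
   SELECT min("createdAt") AS min_created
        , max("createdAt") AS max_created
   FROM   stats
   WHERE  "references"->>'project' = p.project
   ) x;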
Try this index; it makes more sense than what you have now:
CREATE INDEX stats_special_idx ON stats (type, ("references" ->> 'project'), "createdAt")
WHERE "references" ? 'project';
Postgres might still fall back to a sequential scan ... Also remember that a query must repeat the partial index's WHERE condition ("references" ? 'project') to be able to use it; the rewritten query below does.
More might be done with a normalized schema / more selective criteria / a smarter query that only picks min and max "createdAt" ...
Query
I would write your query like this:
SELECT EXTRACT(EPOCH FROM (MAX("createdAt") - MIN("createdAt")))
FROM   stats
WHERE  "references" ? 'project'
AND   (type = 'event1' OR
       type = 'event2'
       AND "createdAt" >= '2015-11-02 00:00:00+08:00'  -- I guess you want this
       AND "createdAt" <  '2015-12-04 00:00:00+08:00'
      )
GROUP  BY "references"->>'project';  -- don't cast
Notes
Don't cast here:

stats."references"::jsonb ? 'project'

The column is jsonb already; you gain nothing. If the predicate were selective, index usage might be prevented by the cast.
Your predicates on "createdAt" are probably incorrect at the lower and upper bound. To include whole days, consider my suggested alternative.
references is a reserved word, so you always have to double-quote it. Don't use it as an identifier. The same goes for double-quoted CaMeL-case names like "createdAt": allowed, but error-prone and a needless complication.
type
type character varying(255) NOT NULL,
The type is a simple string no longer than 50 characters.
There are only 5 distinct values for the type field, this would probably be no more than a few hundred in production.
None of this seems to make sense.
varchar(255) in itself hardly ever makes any sense. 255 characters is an arbitrary limit without significance in Postgres.
- If it's never longer than 50 characters, the limit of 255 makes even less sense.
- In a properly normalized design you would have a small integer column type_id (referencing a small type table), which occupies only 4 bytes per row and makes indexes smaller and faster.
Ideally, you would have a project table listing all projects, and another small integer FK column project_id in stats. That would make any such query faster. And for selective criteria, much faster queries would be possible - even without the suggested normalization. Along these lines:

Optimize GROUP BY query to retrieve latest record per user
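A minimal sketch of that normalization (all object names are illustrative):

CREATE TABLE type (
   type_id serial PRIMARY KEY
 , type    text UNIQUE NOT NULL
);

CREATE TABLE project (
   project_id serial PRIMARY KEY
 , project    text UNIQUE NOT NULL
);

ALTER TABLE stats
   ADD COLUMN type_id    int REFERENCES type
 , ADD COLUMN project_id int REFERENCES project;

-- a matching index then contains only small, fixed-width columns:
CREATE INDEX stats_type_project_created_idx ON stats (type_id, project_id, "createdAt");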
Best Answer
Unfortunately, you are not going to get those queries from mysql.slow_log. Why? There are two options evidently being used.
According to the MySQL Documentation (The Slow Query Log, Paragraph 9), queries not using indexes can be throttled so that their SQL text is suppressed in the log. What this shows is that your log_throttle_queries_not_using_indexes is set accordingly. You can verify this with something like:
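SHOW GLOBAL VARIABLES LIKE 'log_throttle_queries_not_using_indexes';  -- any value above 0 means throttling is active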
Since it is a global dynamic variable, you must log in as root@localhost and run something like:
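SET GLOBAL log_throttle_queries_not_using_indexes = 0;  -- 0 means no throttling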
This should expose new queries entering mysql.slow_log, but you will never see what those previous slow queries were. You must also run this:
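SHOW GLOBAL VARIABLES LIKE 'log_output';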
If you see FILE,TABLE, that means there is also a text version of the slow query log. You will have to get your SysAdmin to retrieve the desired slow log info for you (a Starbucks GiftCard might be required for this one; if the slow log file contains the same suppressed messages, take your Starbucks GiftCard back).