I'm trying to resolve an issue where one particular user hammers our MySQL 5.7 database every two hours for a few minutes. Nothing malicious, but I suspect it may be inefficient (or it may not be, and that's just the way it is).
The mysql.general_log table gives me everything I want, but I prefer the convenience of a file. Output to the file specified with general_log_file works fine, but it doesn't include the user_host data that the database table has.
I've read 5.4.3 The General Query Log and googled, but can't seem to find any hint of this.
Is there a format I can set that will include user_host in the file-based log? I must have missed something obvious; why wouldn't the output be exactly the same, file or table?
Example of what I see in the file:
2018-03-01T15:41:45.450491Z 137300 Query SHOW COLLATION
2018-03-01T15:41:45.451280Z 137299 Query SET NAMES latin1
2018-03-01T15:41:45.451425Z 137299 Query SET character_set_results = NULL
2018-03-01T15:41:45.451554Z 137299 Query SET autocommit=1
PS
My workaround is
mysql --user etc... -e "select user_host,argument from general_log where user_host like 'billybob%'" > junk
which isn't as nice (IMO) as
grep billybob log_file
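Incidentally, MySQL 5.7 can write the general log to the file, the table, or both, controlled by the log_output variable; a minimal sketch, assuming you have the privileges to set global variables:

```sql
-- Send the general query log to both the file and mysql.general_log
SET GLOBAL log_output  = 'FILE,TABLE';
SET GLOBAL general_log = 'ON';
```

That doesn't add user_host to the file, but it keeps both destinations available while investigating.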
edit:
2018-03-01T15:41:45.450317Z 137300 Query /* mysql-connector-java-5.1.16 ( Revision: ${bzr.revision-id} ) */SELECT
@@session.auto_increment_increment
Best Answer
IDEA #1 : USE THE SLOW LOG !!!
The Slow Log ??? Yea, I know what I said : USE THE SLOW LOG !!!
How can you ???
With the slow log being written to a text file, the header of each slow-log entry has the user@host right on its first line (ignore the "# Time:" line, since one or more slow-log entries can be saved within a single time window). For example, the post Mysql slow query log always include "# Time:" shows an example slow log.
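For reference, a slow-log entry header typically looks something like this (the hostname, thread id, and timings here are made up for illustration):

```
# Time: 2018-03-01T15:41:45.450491Z
# User@Host: billybob[billybob] @ app-server-01 [10.0.0.5]  Id: 137300
# Query_time: 0.000123  Lock_time: 0.000045 Rows_sent: 1  Rows_examined: 1
SET timestamp=1519918905;
SELECT @@session.auto_increment_increment;
```

Note the user@host sitting right there on the second line of the header.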
In your case, you could probably grep the slow log for billybob, which shows the user@host header for each occurrence of billybob's queries.
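Since the user@host sits in the entry header, a plain grep does the job; here is a sketch against a fabricated slow-log file (both entries are invented for illustration):

```shell
# Create a tiny fabricated slow log for demonstration
cat > /tmp/mysql-slow.log <<'EOF'
# Time: 2018-03-01T15:41:45.450491Z
# User@Host: billybob[billybob] @ app-server-01 [10.0.0.5]  Id: 137300
# Query_time: 0.000123  Lock_time: 0.000045 Rows_sent: 1  Rows_examined: 1
SELECT @@session.auto_increment_increment;
# Time: 2018-03-01T15:42:01.000000Z
# User@Host: alice[alice] @ app-server-02 [10.0.0.6]  Id: 137301
SELECT 1;
EOF

# Show billybob's entries: the header line plus the two lines after it
grep -A 2 'User@Host: billybob' /tmp/mysql-slow.log
```

This prints only billybob's header and the statement that followed it, skipping alice's entry.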
In order to make the slow log behave like a general log, you must set long_query_time to 0. Then, everything and its grandmother will be written to the slow log.
When you are done with your analysis, set long_query_time back to its old value. Don't forget to truncate the slow log after the analysis.
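The capture-and-restore cycle above can be sketched as statements like these (assumes the SUPER privilege; the restored value of 10 seconds is just a placeholder for whatever your server had before):

```sql
-- Save the current setting first
SELECT @@GLOBAL.long_query_time;

-- Capture everything: log every statement to the slow log
SET GLOBAL slow_query_log = 'ON';
SET GLOBAL long_query_time = 0;

-- ... let billybob's two-hour job run, then restore the old value
SET GLOBAL long_query_time = 10;  -- placeholder: use the value saved above

-- If log_output includes TABLE, the table can be truncated directly
TRUNCATE TABLE mysql.slow_log;
```

For a file-based slow log, truncate the file itself (e.g. with the logs temporarily off) rather than the table.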
IDEA #2 : Poll the Processlist
The problem with using either the slow log or the general log is that an entry will not reach the log until the statement has finished. In other words, if a query takes 10 minutes to run, its entry will not land in the log file until the query completes, 10 minutes after it started.
Wouldn't it be nice to catch the bad queries in the act ???
You should use pt-query-digest. Even with queries that are not done, a histogram will be generated with the pattern of queries that have executed or are still in progress.
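A rough sketch of polling the processlist with pt-query-digest (the host, credentials, and the two-minute window are assumptions; check the options against your installed pt-query-digest version):

```shell
# Poll SHOW PROCESSLIST once a second for 2 minutes and summarize what ran
pt-query-digest --processlist h=localhost,u=monitor,p=secret \
                --run-time=120s --interval 1
```

Because this samples the live processlist rather than a log, long-running statements show up while they are still executing.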
Rather than write up an example, please see my posts where I discuss using pt-query-digest:
Dec 28, 2011: Is there a tool like Microsoft's "SQL Server Profiler" for MySQL?
Nov 24, 2011: MySQL general query log performance effects
GIVE IT A TRY !!!