Please review my answer to this recent question. I believe the circumstances are identical.
Do not change your MySQL configuration at this point, as MySQL is not the problem -- it's only a symptom of the problem... which is that you appear to have a system with a small amount of memory and zero swap space.
Your server is not crashing "because" memory can't be allocated for the buffer pool. Your server is crashing... and then is unable to subsequently restart due to the unavailability of system memory. All of the memory configured for the InnoDB buffer pool is requested from the system at mysql startup.
When you see this log message...
120926 08:00:51 mysqld_safe Number of processes running now: 0
...your server has already died. If it hasn't logged anything prior to this, it's not going to log anything about the first crash. The subsequent logs are from after the automatic attempt to restart.
Check your syslog and you should find messages where the kernel went looking for processes to kill due to an extreme out-of-memory condition.
Step 1 would probably be to add some swap space and/or more RAM, if at all possible.
If that isn't possible, you might actually consider decreasing the innodb-buffer-pool size in your configuration. (I never thought I'd actually hear myself say that). As long as your database is small and your traffic is light, you may not need a buffer pool that large... and since the InnoDB Buffer Pool memory is all allocated at startup whether it's needed or not, this would free up some of your system's memory for whatever else is demanding it. (The 75% to 80%-of-total-RAM recommendation for sizing the buffer pool is only true if the whole server is dedicated to MySQL.)
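As a sketch, the relevant `my.cnf` line would look like this (the value shown is purely illustrative; size it against your actual working set):

```ini
[mysqld]
# Sketch only: pick a value that leaves headroom for Apache and the OS.
# The buffer pool is allocated in full at startup, so lowering this
# directly reduces MySQL's resident memory footprint.
innodb_buffer_pool_size = 128M
```

Note that on MySQL servers of this vintage the buffer pool size cannot be changed at runtime, so a restart of mysqld is required.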
Step 2 will be to review Apache's forking model and what you might need to do differently in the configuration to prevent it from overwhelming your server. It is pretty likely that uncontrolled growth in quantity or memory requirements of the Apache child processes is starting a cascade of events, resulting in the kernel killing MySQL to try to avoid a complete crash of the entire server.
Depending on how much flexibility you have, you might even consider two separate virtual machines for Apache and MySQL.
First, from looking at your existing indexes and comparing them to your example query, you are missing an index on just `accountid`. MySQL uses composite indexes leftmost-first, meaning you can have a composite index like this:

KEY `contactaccounts` (`contactid`,`accountid`,`journalcode`) USING BTREE,

and run a query that filters on `contactid`, and the `contactaccounts` index would be a potential index*. However, a query on `accountid` alone will not utilize the index, because `accountid` is not the leftmost column.

If you never search for `contactid` without an `accountid`, I would create the index like this:
DROP INDEX `contactaccounts` ON `tbl_sessions_2012`;
CREATE INDEX `accountscontact` ON `tbl_sessions_2012` (`accountid`,`contactid`,`journalcode`);
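With that column order, queries on `accountid` alone (or `accountid` plus `contactid`) can use a leftmost prefix of `accountscontact`. A quick sketch (the SELECT lists and literal values here are made up for illustration):

```sql
-- Can use the index: accountid is now the leftmost column.
SELECT * FROM `tbl_sessions_2012` WHERE `accountid` = 42;

-- Can also use it: matches the (accountid, contactid) prefix.
SELECT * FROM `tbl_sessions_2012` WHERE `accountid` = 42 AND `contactid` = 7;

-- Cannot use it: contactid alone skips the leftmost column.
SELECT * FROM `tbl_sessions_2012` WHERE `contactid` = 7;
```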
Now, analyzing your other indexes: by the leftmost rule, the first index in each pair below is a redundant prefix of the second and can be dropped in favour of it:
KEY `contactaccounts` (`contactid`,`accountid`,`journalcode`) USING BTREE
KEY `contactaccountloginmonths` (`contactid`,`accountid`,`journalcode`,`logintypeid`,`actionTime`) USING BTREE
KEY `contactaccountcollections` (`contactid`,`accountcollectionid`,`journalcode`) USING BTREE
KEY `contactaccountcollectionloginmonths` (`contactid`,`accountcollectionid`,`journalcode`,`logintypeid`,`actionTime`) USING BTREE
KEY `organisationaccounts` (`organisationid`,`accountid`) USING BTREE
KEY `organisationaccountloginmonths` (`organisationid`,`accountid`,`journalcode`,`logintypeid`,`actionTime`) USING BTREE
KEY `organisationaccountcollection` (`organisationid`,`accountcollectionid`,`journalcode`) USING BTREE
KEY `organisationaccountcollectionsloginmonths` (`organisationid`,`accountcollectionid`,`journalcode`,`logintypeid`,`actionTime`) USING BTREE
By keeping the duplicates you are using up a lot of disk space on indexes, and every write has to maintain both copies.
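Concretely, removing the redundant prefixes would look something like this (a sketch; verify with `SHOW INDEX FROM tbl_sessions_2012` before dropping anything, and note the `contactaccounts` drop already appears above):

```sql
-- Each of these is a leftmost prefix of a longer index,
-- so dropping them loses no lookup capability.
DROP INDEX `contactaccountcollections` ON `tbl_sessions_2012`;
DROP INDEX `organisationaccounts` ON `tbl_sessions_2012`;
DROP INDEX `organisationaccountcollection` ON `tbl_sessions_2012`;
```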
* I say potential because there is still the possibility that your index will not be used, due to the number of rows MySQL has to scan through. As your `accountid IN ()` list grows, MySQL may determine that it's faster just to do a full table scan regardless of the index.
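You can check which way the optimizer jumps with EXPLAIN (the query below is a made-up example):

```sql
EXPLAIN SELECT `contactid`, `actionTime`
FROM `tbl_sessions_2012`
WHERE `accountid` IN (1, 2, 3, 4, 5);
```

If the `type` column shows `ALL` rather than `range`, the optimizer skipped the index in favour of a full scan.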
I would not use any of those options: I would properly split out each part of the log entry, putting each in its own column of appropriate maximum length. This is more work, of course, as you need to break the data down for insert if you are receiving it as a single string (it will probably also consume a bit more disk space), but it will greatly increase the analysis you can do on the logs later.
Also define useful indexes: at the very least something on that ordering column, and/or, if there is a date+time in those logs, an index covering that.
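As a sketch of what that might look like, assuming web-server access logs (every column name and size here is an assumption; adjust to your actual log format):

```sql
CREATE TABLE access_log (
    id          BIGINT UNSIGNED   NOT NULL AUTO_INCREMENT,
    logged_at   DATETIME          NOT NULL,  -- parsed from the log timestamp
    client_ip   VARCHAR(45)       NOT NULL,  -- 45 chars covers IPv6
    method      VARCHAR(8)        NOT NULL,
    path        VARCHAR(2048)     NOT NULL,
    status      SMALLINT UNSIGNED NOT NULL,
    bytes_sent  INT UNSIGNED,
    referer     VARCHAR(2048),
    user_agent  VARCHAR(512),
    PRIMARY KEY (id),
    KEY idx_logged_at (logged_at),           -- supports time-range analysis
    KEY idx_status_time (status, logged_at)  -- e.g. errors over time
) ENGINE=InnoDB;
```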
If you are just storing logs as blocks of text, one row per line, then there is very little point pushing them into the database: just keep them in their original flat files, where you can use `grep` and other such tools. If you want to take advantage of the database's features at all, create a table optimised for making best use of the DB instead of optimising solely for insert convenience.

You could also break down the consistent parts of query strings, form parts, and cookies (I'm assuming these are HTTP(S) logs) into their own sub-tables, but unlike separating the basic log variables, that is probably considerable overkill.