The system is starting to complain about the number of comments.
I'd be interested in seeing the statements as they are sent to the server - i.e. the fully generated SQL statements with real values, not ? parameter placeholders.
The statement that appears to be causing the issue is this one.
INSERT INTO `inventoryitems` VALUES (DEFAULT, 4, '595', null, 5390001, 5, 1, 1, '', -1, 0, -1, '')
Now, inventoryitems has a FOREIGN KEY constraint:
CONSTRAINT FK_inventoryitems_1
  FOREIGN KEY (characterid) REFERENCES characters (id)
  ON DELETE CASCADE
What is the result of the following query?
SELECT COUNT(*) FROM characters WHERE characters.id = 595;
The values here are also of interest.
ps = con.prepareStatement("INSERT INTO `inventoryitems` VALUES (DEFAULT, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?)", Statement.RETURN_GENERATED_KEYS);
pse = con.prepareStatement("INSERT INTO `inventoryequipment` VALUES (DEFAULT, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?)");
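If the framework doesn't log the generated statements for you, one rough way to see them "with real values" is a small logging-only helper that inlines the bound values into the placeholder string. This is a sketch (the class and method names are mine, and it's naive - it doesn't skip ? characters inside string literals - so use it only for debugging output, never for executing SQL):

```java
// SqlLogUtil: hypothetical logging helper. Renders a parameterized SQL
// string with its bound values inlined, so you can see the statement
// with real values instead of ? placeholders. Debugging only.
public class SqlLogUtil {
    public static String inlineParams(String sql, Object... params) {
        StringBuilder sb = new StringBuilder();
        int p = 0;
        for (char c : sql.toCharArray()) {
            if (c == '?' && p < params.length) {
                Object v = params[p++];
                if (v == null) {
                    sb.append("NULL");
                } else if (v instanceof Number) {
                    sb.append(v);  // numbers stay unquoted
                } else {
                    // quote everything else as a string literal, escaping quotes
                    sb.append('\'').append(v.toString().replace("'", "''")).append('\'');
                }
            } else {
                sb.append(c);
            }
        }
        return sb.toString();
    }

    public static void main(String[] args) {
        System.out.println(inlineParams(
            "INSERT INTO `inventoryitems` VALUES (DEFAULT, ?, ?, ?)",
            4, "595", null));
        // INSERT INTO `inventoryitems` VALUES (DEFAULT, 4, '595', NULL)
    }
}
```

Logging the output of inlineParams just before each executeUpdate() would show whether '595' is really being sent where an integer is expected.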
[EDIT]
I'm starting to get some insight into what may be going on. You appear to be using some sort of framework for your database access (that's the reason why your SQL has '595' for an integer insert instead of 595).
It's possible that the framework is setting the transaction isolation level to something other than read committed (see here). Try to ensure that your transactions use read committed by setting the transaction level explicitly. Try adding this code to yours after the line
Connection con = DatabaseConnection.getConnection();
con.setTransactionIsolation(Connection.TRANSACTION_READ_COMMITTED); // Added line.
Also, check out this and the rest of the thread.
[EDIT] in response to "How do i check it? , can you guide me?"
What I meant was for you to add the line above to your code and rerun your system to the point where you got your original error. Do you still get an error? If you do, then is it different to the original error? If it is different, then post the error text here. HTH.
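If you also want to verify what the current level actually is, you can call con.getTransactionIsolation() and decode the returned int against the standard java.sql.Connection constants. A minimal sketch (the helper and class name are mine; the constants are standard JDBC):

```java
import java.sql.Connection;

public class IsolationCheck {
    // Decode the int returned by Connection.getTransactionIsolation()
    // into a readable name. The constants are defined on java.sql.Connection.
    public static String isolationName(int level) {
        switch (level) {
            case Connection.TRANSACTION_NONE:             return "NONE";
            case Connection.TRANSACTION_READ_UNCOMMITTED: return "READ_UNCOMMITTED";
            case Connection.TRANSACTION_READ_COMMITTED:   return "READ_COMMITTED";
            case Connection.TRANSACTION_REPEATABLE_READ:  return "REPEATABLE_READ";
            case Connection.TRANSACTION_SERIALIZABLE:     return "SERIALIZABLE";
            default:                                      return "UNKNOWN(" + level + ")";
        }
    }

    public static void main(String[] args) {
        // With your live connection you would do:
        //   Connection con = DatabaseConnection.getConnection();
        //   System.out.println(isolationName(con.getTransactionIsolation()));
        System.out.println(isolationName(Connection.TRANSACTION_READ_COMMITTED));
        // READ_COMMITTED
    }
}
```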
Could you take a dump of your database and put it online? You can anonymise anything sensitive. Also, zip up your Java code and I'll run that here against your schema. My Java's a bit rusty, but running it on a different system might give us some clues! :-)
[EDIT] in response to OP's comment "can you give me an example of indexes [code example] ?".
See here (you can also use the KEY keyword as a synonym for INDEX - see the docco here). You should add to your inventoryitems CREATE TABLE statement lines such as
INDEX `ii_type_ix` (type),
INDEX `ii_account_id_ix` (accountid),
As for changing your Java code - I'm not 100% sure, but the problem appears to lie with the database, so your efforts should be concentrated there. Where you can change the code is in varying the order of your queries (while keeping the original functionality).
YOUR QUERY
SELECT post.postid, post.attach FROM newbb_innopost AS post WHERE post.threadid = 51506;
At first glance, that query should only touch 1.1597% (62,510 out of 5,390,146 rows) of the table. It should be fast given the key distribution of threadid 51506.
REALITY CHECK
No matter which version of MySQL (Oracle, Percona, MariaDB) you use, none of them can escape the one enemy they all have in common: the InnoDB architecture.
CLUSTERED INDEX
Please keep in mind that each threadid entry has a primary key attached to it. This means that when you read from the index, it must do a primary key lookup within the clustered index (internally named gen_clust_index). In the clustered index, each InnoDB page contains both data and PRIMARY KEY index info. See my post Best of MyISAM and InnoDB for more info.
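To make the two-step cost concrete, here is a toy sketch (illustrative only, not InnoDB internals) of what a secondary-index scan does: each secondary-index entry yields a primary key, which then requires a second lookup in the clustered index to fetch the row:

```java
import java.util.*;

// Toy model of InnoDB's lookup pattern: a secondary index on threadid
// stores primary keys, and every matching row costs an extra lookup in
// the clustered index (where the data actually lives).
public class ClusteredLookupSketch {
    // clustered index: primary key -> full row (data stored with the PK)
    static final TreeMap<Integer, String> clustered = new TreeMap<>();
    // secondary index: threadid -> list of primary keys
    static final TreeMap<Integer, List<Integer>> byThread = new TreeMap<>();

    static void insert(int postid, int threadid, String attach) {
        clustered.put(postid, threadid + "|" + attach);
        byThread.computeIfAbsent(threadid, k -> new ArrayList<>()).add(postid);
    }

    // SELECT post.postid, post.attach FROM ... WHERE post.threadid = ?
    static List<String> selectByThread(int threadid) {
        List<String> rows = new ArrayList<>();
        for (int pk : byThread.getOrDefault(threadid, List.of())) {
            // second lookup per matching row, via the clustered index
            rows.add(pk + ":" + clustered.get(pk).split("\\|")[1]);
        }
        return rows;
    }

    public static void main(String[] args) {
        insert(1, 51506, "a.png");
        insert(2, 51506, "b.png");
        insert(3, 99, "c.png");
        System.out.println(selectByThread(51506));
        // [1:a.png, 2:b.png] -- two clustered-index lookups for two rows
    }
}
```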
REDUNDANT INDEXES
You have a lot of clutter in the table because some indexes have the same leading columns. MySQL and InnoDB have to navigate through the index clutter to get to the needed BTREE nodes. You should reduce that clutter by running the following:
ALTER TABLE newbb_innopost
DROP INDEX threadid,
DROP INDEX threadid_2,
DROP INDEX threadid_visible_dateline,
ADD INDEX threadid_visible_dateline_index (`threadid`,`visible`,`dateline`,`userid`)
;
Why strip down these indexes?
- The first three indexes start with threadid.
- threadid_2 and threadid_visible_dateline start with the same three columns.
- threadid_visible_dateline does not need postid, since it's the PRIMARY KEY and it's embedded.
BUFFER CACHING
The InnoDB Buffer Pool caches data and index pages. MyISAM only caches index pages.
Just in this area alone, MyISAM does not waste time caching data. That's because it's not designed to cache data. InnoDB caches every data page and index page (and its grandmother) it touches. If your InnoDB Buffer Pool is too small, you could be caching pages, invalidating pages, and removing pages all in one query.
TABLE LAYOUT
You could shave off some space from each row by reconsidering importthreadid and importpostid. You have them as BIGINTs; together they take up 16 bytes per row in the clustered index.
You should run this
SELECT importthreadid,importpostid FROM newbb_innopost PROCEDURE ANALYSE();
This will recommend what data types these columns should be for the given dataset.
CONCLUSION
MyISAM has a lot less to contend with than InnoDB, especially in the area of caching.
While you revealed the amount of RAM (32GB) and the version of MySQL (Server version: 10.0.12-MariaDB-1~trusty-wsrep-log mariadb.org binary distribution, wsrep_25.10.r4002), there are still other pieces to this puzzle you have not revealed:
- The InnoDB settings
- The number of cores
- Other settings from my.cnf
If you can add these things to the question, I can further elaborate.
UPDATE 2014-08-28 11:27 EDT
You should increase the I/O threading and the log buffer:
innodb_read_io_threads = 64
innodb_write_io_threads = 16
innodb_log_buffer_size = 256M
I would consider disabling the query cache (See my recent post Why query_cache_type is disabled by default start from MySQL 5.6?)
query_cache_size = 0
I would preserve the Buffer Pool
innodb_buffer_pool_dump_at_shutdown=1
innodb_buffer_pool_load_at_startup=1
Increase purge threads (if you do DML on multiple tables)
innodb_purge_threads = 4
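Collected into a single my.cnf fragment for convenience (these are just the values suggested above; tune them to your hardware):

```ini
[mysqld]
# more I/O threads and a bigger log buffer
innodb_read_io_threads  = 64
innodb_write_io_threads = 16
innodb_log_buffer_size  = 256M
# disable the query cache
query_cache_size = 0
# preserve the buffer pool across restarts
innodb_buffer_pool_dump_at_shutdown = 1
innodb_buffer_pool_load_at_startup  = 1
# more purge threads, if you do DML on multiple tables
innodb_purge_threads = 4
```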
GIVE IT A TRY !!!
If each question has exactly 4 answers, the query below will help you. In the same way you can include the status field.
SELECT qst.question,
       SUBSTRING_INDEX( GROUP_CONCAT( ans.answer ), ',', 1 ) AS Ans1,
       SUBSTRING_INDEX( SUBSTRING_INDEX( GROUP_CONCAT( ans.answer ), ',', 2 ), ',', -1 ) AS Ans2,
       SUBSTRING_INDEX( SUBSTRING_INDEX( GROUP_CONCAT( ans.answer ), ',', -2 ), ',', 1 ) AS Ans3,
       SUBSTRING_INDEX( GROUP_CONCAT( ans.answer ), ',', -1 ) AS Ans4
FROM question qst, answers ans
WHERE ans.q_id = qst.id
GROUP BY ans.q_id;
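The nested SUBSTRING_INDEX calls can be hard to follow. Here is a sketch of MySQL's SUBSTRING_INDEX semantics in Java (the class and method names are mine) showing how the four aliases pick one answer each out of a single GROUP_CONCAT string:

```java
public class SubstringIndexSketch {
    // Mirrors MySQL's SUBSTRING_INDEX(str, delim, count):
    // count > 0 -> everything before the count-th delimiter from the left;
    // count < 0 -> everything after the count-th delimiter from the right;
    // if there are fewer delimiters than |count|, the whole string is returned.
    public static String substringIndex(String str, String delim, int count) {
        if (count == 0) return "";
        if (count > 0) {
            int idx = -1;
            for (int i = 0; i < count; i++) {
                idx = str.indexOf(delim, idx + 1);
                if (idx < 0) return str;
            }
            return str.substring(0, idx);
        } else {
            int idx = str.length();
            for (int i = 0; i < -count; i++) {
                idx = str.lastIndexOf(delim, idx - 1);
                if (idx < 0) return str;
            }
            return str.substring(idx + delim.length());
        }
    }

    public static void main(String[] args) {
        String grouped = "A,B,C,D"; // what GROUP_CONCAT(ans.answer) might return
        System.out.println(substringIndex(grouped, ",", 1));                          // Ans1: A
        System.out.println(substringIndex(substringIndex(grouped, ",", 2), ",", -1)); // Ans2: B
        System.out.println(substringIndex(substringIndex(grouped, ",", -2), ",", 1)); // Ans3: C
        System.out.println(substringIndex(grouped, ",", -1));                         // Ans4: D
    }
}
```

Note the same caveat applies to the SQL: this trick assumes the answers themselves contain no commas, since GROUP_CONCAT's default separator is a comma.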