I work for Tokutek. The answers here are mostly good. As Justin mentions, you need the right index, and your schema probably does not have the right index. I am happy to hear TokuDB was a little faster than InnoDB, but for table scans, assuming the table is not aged, it can go either way.
Here is a talk I gave on indexing that you may find helpful: http://www.youtube.com/watch?v=vaGAoK66ctM.
The first half is on indexing, the second half is a bit of a technical dive into fractal trees on a whiteboard. Hopefully this helps you with index design. I highly recommend understanding clustering secondary indexes, which TokuDB provides.
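As a concrete illustration of a clustering secondary index in TokuDB (table and column names here are hypothetical):

```sql
-- A clustering secondary index in TokuDB stores a full copy of every row,
-- ordered by the index key, so range scans on the key need no extra
-- primary-key lookups.
CREATE TABLE orders (
  id INT NOT NULL,
  customer_id INT,
  created DATETIME,
  PRIMARY KEY (id),
  CLUSTERING KEY ix_customer (customer_id)
) ENGINE=TokuDB;

-- Or add one to an existing table:
ALTER TABLE orders ADD CLUSTERING KEY ix_created (created);
```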
On other points. RolandoMySQLDBA is mostly correct on how InnoDB and TokuDB perform. Here is how to think about the performance of TokuDB. While the dataset fits in memory, TokuDB's fractal trees do not have any inherent advantage over InnoDB or other B-tree based storage engines. The bottleneck, or rather the cliff, hits when data is big and does not fit in main memory. Where the write performance of InnoDB and other B-tree based storage engines falls off a cliff, TokuDB's performance stays level. THAT is the indicator that you are getting something out of TokuDB. TokuDB will not take some well-running existing system and supercharge its performance. TokuDB takes systems that work well in memory but start to break down when data falls out of memory, and ensures those systems keep performing well as the data grows. That is what is going on in the benchmarks Percona shows, and that is what is going on in iiBench (http://www.tokutek.com/resources/benchmarks/#iiBench).
Combine this write performance with TokuDB's compression, and suddenly clustering indexes, as explained in the indexing talk, become relatively cheap. Maintaining more, and better, indexes becomes cheaper. With better indexes, much of the I/O from queries disappears, and query throughput improves. This is how one can benefit from TokuDB.
The reason is that you don't want heavyweight tables in both the TokuDB engine and the InnoDB engine.
These two engines both need their resources. They are both very smart about managing resources inside their own scope, and yet they are completely ignorant about one another.
So while InnoDB can coordinate its I/O between multiple query executions, it is unaware that TokuDB may at the same time use the disk in some way.
In particular, the advice "After converting your tables to TokuDB, you should comment out the global options related to memory caches (ie: innodb_buffer_pool%)" relates to the convention of giving InnoDB most of the memory available on the host. In that case, little to no memory remains for TokuDB to operate with.
A situation where you have large tables in both engines is undesirable and would very probably lead to poor performance in both. It is just fine to leave some small, infrequently accessed InnoDB tables around with a small allocation of memory.
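A sketch of what this split might look like in my.cnf. The sizes are purely illustrative assumptions, not recommendations; innodb_buffer_pool_size and tokudb_cache_size are the real variable names for each engine's main cache:

```ini
# Hypothetical my.cnf fragment for a server where TokuDB is the primary
# engine and only a few small InnoDB tables remain.

# innodb_buffer_pool_size = 12G   # commented out: would starve TokuDB
innodb_buffer_pool_size = 256M    # small pool for leftover InnoDB tables
tokudb_cache_size       = 12G     # give TokuDB the bulk of memory
```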
Best Answer
I've had the same experience. The solution is to rebuild the table via this command:
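Reconstructed from the explanation that follows (forcing an ALTER with ENGINE=TokuDB), the rebuild is an ALTER of roughly this shape; the table name my_table is a placeholder:

```sql
-- Rebuild the table from scratch, reclaiming tablespace.
-- ROW_FORMAT selects the compression algorithm.
ALTER TABLE my_table ENGINE=TokuDB ROW_FORMAT=TOKUDB_SMALL;
```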
Replace TOKUDB_SMALL with your favourite compression algorithm.

With InnoDB, an OPTIMIZE TABLE command simply does a trivial ALTER. But in TokuDB it does not, and tablespace is not reclaimed. By forcing an ALTER with ENGINE=TOKUDB you're taking this out of TokuDB's hands, and letting the MySQL server tell TokuDB to create a (really) new table.