divided linked to some info that explains the basic issue (there are performance differences), but it's not simple enough to say that one is always better than the other (otherwise, there'd be no reason to have both). Also, in MyISAM, the 64KB max size for VARCHAR isn't per field -- it's per record.
Basically, there are four ways to store strings in database records:
- fixed length
- C-style strings (marked with a NUL or similar terminator character at the end of the string)
- Pascal style strings (a few bytes to indicate length, then the string)
- Pointers (store the string somewhere else)
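The four layouts above can be sketched in a few lines of Python. This is a toy illustration of the encodings, not MySQL's actual on-disk format:

```python
import struct

s = b"hello"
FIXED_LEN = 16

# 1. Fixed length: pad every value out to the declared maximum.
fixed = s.ljust(FIXED_LEN, b"\x00")

# 2. C-style: the string followed by a NUL terminator byte.
c_style = s + b"\x00"

# 3. Pascal-style: a length prefix (1 byte here, so max length 255).
pascal = struct.pack("B", len(s)) + s

# 4. Pointer: the record holds only a reference; the string lives elsewhere.
heap = bytearray()
offset = len(heap)
heap += pascal                        # store the string externally...
pointer = struct.pack("<I", offset)   # ...and keep a 4-byte pointer in the row

print(len(fixed), len(c_style), len(pascal), len(pointer))  # 16 6 6 4
```

Note how only #1 and #4 give every record the same in-row width, which is what makes them easy to skip over.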
MyISAM uses something similar to #3 for VARCHAR, and a hybrid approach for TEXT where it stores the beginning of the string in the record, then the rest of the string somewhere else.
InnoDB is similar for VARCHAR, but stores the complete TEXT field outside of the record.
With #1 and #4, the data in the record is always the same length, so it's easier to skip over if you don't need the string but need the fields after it. Neither #2 nor #3 is too bad for short strings: #2 has to keep scanning for the terminator, while #3 can skip ahead, so as the strings get longer, #2 gets worse for this particular use case.
If you actually need to read the string, #4 is slower, as you have to read the record and then read the string, which might be stored elsewhere on the disk, depending on just how the database handles it. #1 is always pretty straightforward. Again, #2 gets worse the longer the string is, while #3 is a little worse than #2 for very small strings, but better as they get longer.
Then there's storage requirements: #1 is always a fixed length, so it might have bloat if most strings aren't the max length. #2 has 1 extra byte; #3 typically has 1 extra byte if the max length is 255, or 2 extra bytes for a 64KB max. #4 has the pointer length, plus typically the same rules as #3.
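A rough per-value cost model for the four schemes might look like the following. The numbers are illustrative (a 4-byte pointer is an assumption, and real engines add their own bookkeeping), using the one-or-two-byte length prefix the MySQL docs describe below:

```python
# Rough per-value storage cost for each scheme; not exact MySQL accounting.
def storage_bytes(actual_len, max_len, scheme):
    if scheme == "fixed":        # 1: always the declared maximum
        return max_len
    if scheme == "c":            # 2: string plus 1 terminator byte
        return actual_len + 1
    if scheme == "pascal":       # 3: 1-byte prefix up to 255, else 2
        prefix = 1 if max_len <= 255 else 2
        return actual_len + prefix
    if scheme == "pointer":      # 4: assumed 4-byte pointer in the row,
        prefix = 1 if max_len <= 255 else 2   # plus Pascal-style rules
        return 4 + actual_len + prefix        # for the external copy
    raise ValueError(scheme)

# A 10-byte string in a column declared 255 bytes wide:
for scheme in ("fixed", "c", "pascal", "pointer"):
    print(scheme, storage_bytes(10, 255, scheme))
```

For short strings in a wide column, #1's bloat dominates; the others stay close to the actual string length.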
For the specific implementations within MySQL 5.1, the docs for MyISAM state:
- Support for a true VARCHAR type; a VARCHAR column starts with a length stored in one or two bytes.
- Tables with VARCHAR columns may have fixed or dynamic row length.
- The sum of the lengths of the VARCHAR and CHAR columns in a table may be up to 64KB.
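Those two MyISAM rules can be expressed as small helpers. The function names are my own, not anything from MySQL:

```python
# Hypothetical helpers expressing the two MyISAM rules quoted above.
def varchar_length_prefix_bytes(max_length):
    # "a length stored in one or two bytes": one byte can encode
    # lengths up to 255; anything larger needs two.
    return 1 if max_length <= 255 else 2

def fits_myisam_row(column_widths):
    # The combined VARCHAR/CHAR widths may be up to 64KB per record.
    return sum(column_widths) <= 64 * 1024

print(varchar_length_prefix_bytes(255))    # 1
print(varchar_length_prefix_bytes(1000))   # 2
print(fits_myisam_row([255] * 100))        # True
print(fits_myisam_row([65535, 100]))       # False
```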
While for InnoDB:
- The variable-length part of the record header contains a bit vector for indicating NULL columns. If the number of columns in the index that can be NULL is N, the bit vector occupies CEILING(N/8) bytes. (For example, if there are anywhere from 9 to 15 columns that can be NULL, the bit vector uses two bytes.) Columns that are NULL do not occupy space other than the bit in this vector. The variable-length part of the header also contains the lengths of variable-length columns. Each length takes one or two bytes, depending on the maximum length of the column. If all columns in the index are NOT NULL and have a fixed length, the record header has no variable-length part.
- For each non-NULL variable-length field, the record header contains the length of the column in one or two bytes. Two bytes will only be needed if part of the column is stored externally in overflow pages or the maximum length exceeds 255 bytes and the actual length exceeds 127 bytes. For an externally stored column, the two-byte length indicates the length of the internally stored part plus the 20-byte pointer to the externally stored part. The internal part is 768 bytes, so the length is 768+20. The 20-byte pointer stores the true length of the column.
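The arithmetic in the InnoDB excerpt is easy to check directly, e.g. the CEILING(N/8) bit-vector rule and the 768+20 length for an externally stored column:

```python
import math

# NULL bit vector: CEILING(N/8) bytes for N nullable indexed columns.
def null_bitvector_bytes(nullable_columns):
    return math.ceil(nullable_columns / 8)

print(null_bitvector_bytes(8))    # 1 byte
print(null_bitvector_bytes(9))    # 2 bytes
print(null_bitvector_bytes(15))   # 2 bytes

# Externally stored column: 768-byte local prefix + 20-byte pointer.
print(768 + 20)                   # 788
```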
...
as with so many other things when dealing with databases, if you're not sure what's best for your needs, try benchmarking it with similar data & usage, and see how they behave.
No, he's not right. If a block is not in the InnoDB cache, then it has to be fetched, which means it will come either from disk or from the ZFS cache, at which point two copies of it exist in main memory. If you use that block, it will come out of the InnoDB cache. If you write that block, it will go from the InnoDB cache to the disk. The ZFS cache is just a helpless spectator in this scenario.
However, if your issue is INSERT performance, it's unlikely to be related to this unless your system is under a very heavy load... is it? From the docs:
If an index record should be inserted into a nonunique secondary index, InnoDB checks whether the secondary index page is in the buffer pool. If that is the case, InnoDB does the insertion directly to the index page. If the index page is not found in the buffer pool, InnoDB inserts the record to a special insert buffer structure. The insert buffer is kept so small that it fits entirely in the buffer pool, and insertions can be done very fast.

Periodically, the insert buffer is merged into the secondary index trees in the database. Often it is possible to merge several insertions into the same page of the index tree, saving disk I/O operations. It has been measured that the insert buffer can speed up insertions into a table up to 15 times.

The insert buffer merging may continue to happen after the inserting transaction has been committed. In fact, it may continue to happen after a server shutdown and restart (see Section 13.2.6.2, "Forcing InnoDB Recovery").

Insert buffer merging may take many hours when many secondary indexes must be updated and many rows have been inserted.
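The quoted behavior can be modeled as a toy buffer: inserts into pages already in the buffer pool go straight in, everything else is queued and merged later in batches. The class and method names here are invented for illustration; this is not InnoDB's actual algorithm:

```python
# Toy model of the insert-buffer idea described in the docs above.
from collections import defaultdict

class ToyInsertBuffer:
    def __init__(self, pool_pages):
        self.pool = {p: [] for p in pool_pages}   # index pages cached in RAM
        self.pending = defaultdict(list)          # buffered insertions

    def insert(self, page, record):
        if page in self.pool:
            self.pool[page].append(record)        # direct insertion
        else:
            self.pending[page].append(record)     # buffer for a later merge

    def merge(self):
        # Periodic merge: apply all buffered insertions for a page in one
        # go, paying one page load per batch instead of one per row.
        for page, records in self.pending.items():
            self.pool.setdefault(page, []).extend(records)
        self.pending.clear()

buf = ToyInsertBuffer(pool_pages=["p1"])
buf.insert("p1", "row-a")   # page cached: goes straight to the index
buf.insert("p9", "row-b")   # page not cached: buffered
buf.insert("p9", "row-c")
buf.merge()                 # both p9 rows merged with one page load
print(buf.pool["p9"])       # ['row-b', 'row-c']
```

The speedup in the docs comes from exactly this batching: several buffered rows land on the same index page during one merge.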
SHOW ENGINE INNODB STATUS should show you exactly what the DB is waiting on. Also see if strace can help.
Best Answer
Rather than retype a lot of information, here are some posts I made on the DBA StackExchange before on how to move MyISAM tables to different folders and disks