InnoDB has limitations on row lengths and BLOBs. According to the MySQL documentation:
The maximum row length, except for variable-length columns (VARBINARY,
VARCHAR, BLOB and TEXT), is slightly less than half of a database
page. That is, the maximum row length is about 8000 bytes. LONGBLOB
and LONGTEXT columns must be less than 4GB, and the total row length,
including BLOB and TEXT columns, must be less than 4GB.
If a row is less than half a page long, all of it is stored locally
within the page. If it exceeds half a page, variable-length columns
are chosen for external off-page storage until the row fits within
half a page, as described in Section 14.3.12.2, “File Space Management”.
Although InnoDB supports row sizes larger than 65,535 bytes
internally, MySQL itself imposes a row-size limit of 65,535 for the
combined size of all columns:
mysql> CREATE TABLE t (a VARCHAR(8000), b VARCHAR(10000),
    -> c VARCHAR(10000), d VARCHAR(10000), e VARCHAR(10000),
    -> f VARCHAR(10000), g VARCHAR(10000)) ENGINE=InnoDB;
ERROR 1118 (42000): Row size too large. The maximum row size for the used
table type, not counting BLOBs, is 65535. You have to change some columns
to TEXT or BLOBs
See Section E.10.4, “Limits on Table Column Count and Row Size”.
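To see why that CREATE TABLE fails, the row-size arithmetic can be checked directly. This is a rough sketch, assuming a single-byte character set, where each VARCHAR also needs 1 length byte (up to 255 characters) or 2 length bytes (longer):

```python
# Approximate per-column storage for VARCHAR(n) in a single-byte charset:
# n data bytes plus 1 length byte (n <= 255) or 2 length bytes (n > 255).
def varchar_bytes(n):
    return n + (1 if n <= 255 else 2)

# One VARCHAR(8000) plus six VARCHAR(10000), as in the failing CREATE TABLE
columns = [8000] + [10000] * 6
row_size = sum(varchar_bytes(n) for n in columns)

print(row_size)          # 68014
print(row_size > 65535)  # True -> ERROR 1118: Row size too large
```

The combined size comes to 68,014 bytes, which is over the 65,535-byte limit before InnoDB's own row-length restrictions are even considered.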
You are going to have to recreate the mysqldump so that the storage engine is specified. Perhaps just drop the --compatible=mssql
from the mysqldump command. The end result is that the table will remain a MyISAM table when being imported into MariaDB.
This is just a guess, but look at the error message you posted. If a BLOB prefix is 768 bytes and you have 10 BLOBs, that's 7,680 bytes. Against the roughly 8,000-byte half-page limit, that leaves you with 320 bytes. If the remaining datatypes exceed 320 bytes, then it is impossible to convert to InnoDB.
Since a TEXT column is a variant of a BLOB, converting to TEXT does nothing, since the storage requirements for TEXT and BLOB fields are identical.
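The arithmetic behind that guess, sketched out (assuming the 768-byte local BLOB prefix and the ~8,000-byte half-page limit quoted above):

```python
BLOB_PREFIX = 768   # bytes stored locally in the row per BLOB/TEXT column
HALF_PAGE = 8000    # approximate in-page row limit quoted in the docs

num_blobs = 10
prefix_total = num_blobs * BLOB_PREFIX   # local prefix bytes for all BLOBs
remaining = HALF_PAGE - prefix_total     # bytes left for every other column

print(prefix_total, remaining)  # 7680 320
```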
I addressed this issue 1.5 years ago: innodb_file_format Barracuda
Just from reading the documentation, I was able to figure out mathematically that the smaller you make the key_block_size, the less extra space you need to add to the InnoDB Buffer Pool.
Here is the chart I made up in that earlier post:
- You have a DB Server with an 8G Buffer Pool
- You ran compression with key_block_size=8
  - 8 is 50.00% of 16; 50.00% of 8G is 4G
  - raise innodb_buffer_pool_size to 12G (8G + 4G)
- You ran compression with key_block_size=4
  - 4 is 25.00% of 16; 25.00% of 8G is 2G
  - raise innodb_buffer_pool_size to 10G (8G + 2G)
- You ran compression with key_block_size=2
  - 2 is 12.50% of 16; 12.50% of 8G is 1G
  - raise innodb_buffer_pool_size to 9G (8G + 1G)
- You ran compression with key_block_size=1
  - 1 is 06.25% of 16; 06.25% of 8G is 0.5G (512M)
  - raise innodb_buffer_pool_size to 8704M (8G (8192M) + 512M)
Given the following
- BPS = Buffer Pool Size
- KBS = key_block_size
- BPSFC = Buffer Pool Size For Compression
The ideal size for the Buffer Pool in a perfect world would be
BPSFC = BPS * (16 + KBS) / 16
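As a quick sketch, the formula reproduces every row of the chart above:

```python
def bpsfc(bps_gb, kbs):
    """Ideal Buffer Pool size (in GB) after compressing with key_block_size=kbs."""
    return bps_gb * (16 + kbs) / 16

# With BPS = 8G, each key_block_size yields the chart's suggested size:
for kbs in (8, 4, 2, 1):
    print(f"KBS={kbs}: {bpsfc(8, kbs)}G")
# KBS=8: 12.0G
# KBS=4: 10.0G
# KBS=2: 9.0G
# KBS=1: 8.5G  (i.e., 8704M)
```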
From this formula, less is more: the smaller KBS is, the smaller the increase in BPS needs to be. Additionally, it would appear that the CPU would work a little harder in response to tighter compression. How?
- For KBS=8, evicting one uncompressed page leaves room for 02 compressed pages
- For KBS=4, evicting one uncompressed page leaves room for 04 compressed pages
- For KBS=2, evicting one uncompressed page leaves room for 08 compressed pages
- For KBS=1, evicting one uncompressed page leaves room for 16 compressed pages
- For KBS=X, evicting one uncompressed page leaves room for 16/X compressed pages
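That 16/X relationship is simply the ratio of the default 16KB uncompressed page size to the compressed block size; a minimal sketch:

```python
PAGE_SIZE_KB = 16  # default InnoDB page size

def pages_freed(kbs):
    # Evicting one 16KB uncompressed page leaves room for this many
    # compressed pages of key_block_size=kbs.
    return PAGE_SIZE_KB // kbs

print([(kbs, pages_freed(kbs)) for kbs in (8, 4, 2, 1)])
# [(8, 2), (4, 4), (2, 8), (1, 16)]
```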
I can see the CPU working harder to introduce more compressed pages with tighter compression. This could result in evicting LRU compressed pages more often.
If this is the case, it would appear that making the BPS even bigger than the suggested values from these formulas would give CPU some relief as well as reduce the rate of the eviction of compressed pages from the Buffer Pool.
EXAMPLE: From my first chart, an 8GB Buffer Pool would need to be 12GB for key_block_size=8. Making the Buffer Pool larger than 12GB would relieve the CPU of a lot of this heavy lifting (in the form of evicting uncompressed pages). Perhaps doubling the Buffer Pool Size may need to be explored.
As for your question
will enabling compression on tables without changing the buffer pool size hurt or benefit the buffer pool hit rate?
I would say it would hurt, not so much because of the hit rate against the compressed pages, but because of the additional housecleaning in terms of the management of uncompressed pages and the CPU usage required for it. This would lower InnoDB performance no matter how much tuning the rest of the Storage Engine gets.
Best Answer
I tried different methods and read many articles. The standard way to do so is with an ALTER TABLE back to an uncompressed row format.
However, this will leave KEY_BLOCK_SIZE on all keys. Some say this is just the SHOW CREATE TABLE display, as KEY_BLOCK_SIZE=8 stands for the largest block size for the keys, but the value is exactly that of the COMPRESSED table. If the table was created with KEY_BLOCK_SIZE=1, keys will have KEY_BLOCK_SIZE=1 after uncompression. Some, on the other hand, say this bug has been fixed in MySQL 8, but I can confirm it exists in MariaDB 10.5.9.
VERDICT
ALTER is a clean way, but I favour INSERT INTO SELECT for two reasons: INSERT writes the compressed pages on the disk.