Problem using resize2fs too often

Tags: ext3, resize2fs

I have a partition holding MySQL data that is constantly growing. My LVM PV has precious little free space remaining, so I find myself frequently adding space to my /var partition with lvextend and resize2fs in smallish increments (250-500 MB at a time), so as not to give /var too much space and then be unable to allocate those PEs to other partitions later should I need them.
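
For reference, each increment looks something like this (the VG and LV names here are made up for illustration):

```shell
# Grow the LV by 250 MB, then grow the ext3 filesystem to fill it.
# ext3 supports online growth, so this works while /var is mounted.
sudo lvextend -L +250M /dev/vg0/var
sudo resize2fs /dev/vg0/var
```

Newer versions of lvextend can do both steps in one command with the -r (--resizefs) option.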

I'm concerned about reaching some limit or causing a problem by calling resize2fs too often to grow this filesystem. Is there a limit to how often resize2fs can be used to grow an Ext3 filesystem? Is it better to do one large Ext3 resize rather than many small ones? Does resizing using resize2fs too often carry a potential for problems or data loss?

Best Answer

Beyond the wear and tear on the HDDs, I can't see any reason why this would be dangerous. I've never come across an EXT3/EXT4 parameter that limits the number of times you can do this, nor have I seen any counter tracking it.

Looking through the output from tune2fs, I see nothing alarming that would lead me to believe many resizes would be harmful to the filesystem or the device, beyond that wear and tear.

Example

$ sudo tune2fs -l /dev/mapper/vg_grinchy-lv_root
tune2fs 1.41.12 (17-May-2010)
Filesystem volume name:   <none>
Last mounted on:          /
Filesystem UUID:          74e66905-d09a-XXXX-XXXX-XXXXXXXXXXXX
Filesystem magic number:  0xEF53
Filesystem revision #:    1 (dynamic)
Filesystem features:      has_journal ext_attr resize_inode dir_index filetype needs_recovery extent flex_bg sparse_super large_file huge_file uninit_bg dir_nlink extra_isize
Filesystem flags:         signed_directory_hash 
Default mount options:    user_xattr acl
Filesystem state:         clean
Errors behavior:          Continue
Filesystem OS type:       Linux
Inode count:              3276800
Block count:              13107200
Reserved block count:     655360
Free blocks:              5842058
Free inodes:              2651019
First block:              0
Block size:               4096
Fragment size:            4096
Reserved GDT blocks:      1020
Blocks per group:         32768
Fragments per group:      32768
Inodes per group:         8192
Inode blocks per group:   512
Flex block group size:    16
Filesystem created:       Sat Dec 18 19:05:48 2010
Last mount time:          Mon Dec  2 09:15:34 2013
Last write time:          Thu Nov 21 01:06:03 2013
Mount count:              4
Maximum mount count:      -1
Last checked:             Thu Nov 21 01:06:03 2013
Check interval:           0 (<none>)
Lifetime writes:          930 GB
Reserved blocks uid:      0 (user root)
Reserved blocks gid:      0 (group root)
First inode:              11
Inode size:               256
Required extra isize:     28
Desired extra isize:      28
Journal inode:            8
First orphan inode:       1973835
Default directory hash:   half_md4
Directory Hash Seed:      74e66905-d09a-XXXX-XXXX-XXXXXXXXXXXX
Journal backup:           inode blocks
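
As a sanity check, the size-related fields above are internally consistent; a quick calculation (values copied straight from the tune2fs listing) confirms the filesystem size and the reserved-block percentage:

```python
block_size = 4096        # "Block size" from the tune2fs output
block_count = 13107200   # "Block count"
reserved = 655360        # "Reserved block count"

size_gib = block_count * block_size / 2**30
print(size_gib)                 # 50.0 -> a 50 GiB filesystem
print(reserved / block_count)   # 0.05 -> the default 5% root reserve
```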

dumpe2fs

You can also poke at EXT3/EXT4 filesystems using dumpe2fs, which shows essentially the same information as tune2fs. Its output is too long to include here, mainly because it includes details on every block group in the filesystem. But going through that output, I again saw no mention of any counter inherent to EXT3/EXT4.
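
If you only want the superblock summary without the per-group detail, dumpe2fs takes a -h flag (device name as in the tune2fs example above):

```shell
# -h prints only the superblock information, omitting the block-group listing
sudo dumpe2fs -h /dev/mapper/vg_grinchy-lv_root
```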
