This is a regression in kernel 4.0, causing conversion filters in balance to have no effect; it looks like all conversions are affected (not just single->raid1 or raid1->raid5). See a recent mailing list thread, where there's currently no official fix. If you're up to patching your kernel, there's an easy patch to apply as a temporary fix.
This is a known bug in v4.0. I sent in a patch [1] to revert the commit
that caused the regression, but it didn't get any response. You
could apply that or just revert 2f0810880f08 ("btrfs: delete chunk
allocation attemp when setting block group ro") to fix your problem for
now.
[1]: https://patchwork.kernel.org/patch/6238111/
A snapshot (in this sense) is a part of the filesystem. In btrfs terminology, it's a subvolume: one of the directory trees on the volume. It isn't in "archive form". Making a snapshot of a subvolume creates a new subvolume which contains the data of the original subvolume at the time the snapshot was made. Subsequent writes to the original subvolume don't affect the snapshot and vice versa. All subvolumes are part of the same volume; they designate subsets (potentially overlapping) of the data in the volume.
The parts of the data that haven't been modified in either subvolume share their storage. Creating a snapshot initially requires no storage except for the snapshot control data; the amount of storage increases over time as the contents of the subvolumes diverge.
The most important property of snapshot creation is that it's atomic: it takes a picture of the data at a point in time. This is useful to make backups: if the backup program copies files from the live system, it might interact poorly with modifications to the files. For example, if a file is moved from directory A to directory B, but the backup program traversed B before the move and A after the move, the file wouldn't be included in the backup. Snapshots solve this problem: the file will be in A if the snapshot is made before the move and in B if it's made after, but either way it will be there. Then the backup program can copy from the snapshot to the external media.
Since the snapshot is on the same volume as the original, it's stored in the same way, e.g. it's encrypted if the volume is encrypted.
A snapshot reproduces the original directory tree, including permissions and all other metadata. So the permissions are the same as the original. In addition, users must be able to access the snapshot directory itself. If you don't want users to be able to access a snapshot at all, create it under a directory that they can't access (you can place the snapshot anywhere you want).
If you want to make a copy of the snapshot outside the filesystem, access or mount the snapshot, then make a copy with your favorite program (cp, rsync, etc.). You can find sample commands in the btrfs wiki; see the manual page for a full reference.
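The snapshot-then-copy workflow can be sketched as follows. The mount points, snapshot name, and the choice of rsync are assumptions for illustration, not prescribed by btrfs; with DRY_RUN unset the commands are executed for real (as root) instead of printed.

```shell
#!/bin/sh
# Sketch of a snapshot-based backup. DRY_RUN=1 (the default here)
# only prints the commands; unset it and run as root to execute them.
DRY_RUN="${DRY_RUN:-1}"
run() { if [ -n "$DRY_RUN" ]; then echo "$*"; else "$@"; fi; }

backup_via_snapshot() {
    src="$1"; dst="$2"; snap="$src/.backup-snap"
    run btrfs subvolume snapshot -r "$src" "$snap"  # atomic, read-only view
    run rsync -a "$snap/" "$dst/"                   # copy from the frozen view
    run btrfs subvolume delete "$snap"              # drop the snapshot again
}

backup_via_snapshot /mnt/data /mnt/backup
```

Because the snapshot is read-only and taken atomically, rsync sees a consistent point-in-time tree no matter how long the copy takes.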
Best Answer
TL;DR: The metadata allocation will automatically increase, provided the btrfs is not suffering a general low-space condition. In cases where no unallocated free space exists, the automatic increase is hampered. If, however, the data part of the btrfs has been allocated more space than it needs, it is possible to redistribute that space; this is called balancing in btrfs.

Assuming that there is enough unallocated space on the backing block device(s) of the btrfs, the metadata part of the filesystem automatically allocates more space to increase/expand the metadata, just as assumed by the OP. Therefore the answer is: yes (provided there is no low free-space condition in the btrfs), the metadata will get increased automatically, as the following walkthrough shows:

(1) We have a look at the initial allocation setup of a btrfs (on a 40GB device).

(2) As can be seen, the space allocated in the filesystem to store metadata is 1.55GiB, of which 1.33GiB, hence almost all, is used (this might be the situation occurring in the OP's case).
(3) We now provoke an increase of the metadata. To do so, we copy the /home folder using the --reflink=always option of the cp command.

(4) Although this adds a lot of new data to the filesystem (we assume there were lots of files in /home), the --reflink option uses little to no additional space for the actual file data, thanks to the copy-on-write mechanism. In short, mostly metadata was added to the filesystem. We can hence have another look.

As can be seen, the space allocated for metadata in this btrfs has automatically increased. Since this happens automatically, it normally goes undetected by the user. However, there are some cases, mostly those where the whole filesystem is already pretty much filled up, in which
btrfs may begin to "stutter" and fail to automatically increase the space allocated for the metadata. The reason would be, for example, that all the space has already been allocated to the parts (Data, System, Metadata, GlobalReserve). Confusingly, it may still look as if there were free space. An example: the system has allocated all of the 40GiB, yet the allocation is somewhat off balance, since while there is still space for new files' data, the metadata (as in the OP's case) is low. Automatically allocating more space from the devices backing the btrfs filesystem is no longer possible (simply add up the totals of the allocation: 38.12G + 1.55G + .. ~= 40GiB).

Since there is, however, excess free space that was allocated to the data part of the filesystem, it can now be useful, even necessary, to balance the btrfs. Balancing means redistributing the already allocated space.

In the case of the OP, it may be assumed that, for some reason, an imbalance between the different parts of the btrfs
allocation has occurred.

Unfortunately, the simple command sudo btrfs balance start -dusage=0 <mountpoint>, which in principle should find completely empty blocks (allocated for data) and put them to better use (namely the almost depleted metadata space), may fail, because no completely empty data blocks can be found. The btrfs developers hence recommend to successively increase the usage limit that determines when data blocks should be rearranged to reclaim space. So if the -dusage=0 run shows no relocation, one should retry with successively higher limits, e.g. -dusage=5, then -dusage=10, and so on, until some chunks are relocated.
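That escalation is naturally written as a small loop. The mount point and the particular usage steps below are assumptions, not an official recipe; with DRY_RUN unset the balance commands are actually executed (as root):

```shell
#!/bin/sh
# Successively raise the balance usage filter until data chunks get
# relocated. DRY_RUN=1 (the default here) only prints the commands;
# unset it and run as root to really balance.
DRY_RUN="${DRY_RUN:-1}"
run() { if [ -n "$DRY_RUN" ]; then echo "$*"; else "$@"; fi; }

balance_escalate() {
    # Each step compacts data chunks that are at most $usage percent
    # full, returning freed chunks to the unallocated pool that the
    # metadata can then grow into. Stop once a step fails.
    for usage in 0 5 10 20 40; do
        run btrfs balance start -dusage="$usage" "$1" || break
    done
}

balance_escalate /mnt
```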
The other answer has hinted at the influence of the btrfs nodesize, which somewhat influences how quickly metadata grows. The nodesize is (as mentioned in the other answer) set only once, at mkfs.btrfs filesystem creation time. In theory, one could reduce the size of the metadata if it were possible to change the nodesize to a lower value; it is not. The nodesize therefore cannot help to expand or increase the allocated metadata space in any way; at best it could have helped to conserve space in the first place. A smaller nodesize is, however, not guaranteed to reduce the metadata size. Indeed, in some cases larger nodesizes may reduce btrfs's tree-traversal depth, as nodes can contain more "links".
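For completeness, choosing a non-default nodesize at creation time looks like this. The device name is a placeholder, and the command is printed rather than executed unless DRY_RUN is unset:

```shell
#!/bin/sh
# The nodesize can only be chosen when the filesystem is created.
# DRY_RUN=1 (the default here) only prints the command; never run
# mkfs against a device holding data you still need.
DRY_RUN="${DRY_RUN:-1}"
run() { if [ -n "$DRY_RUN" ]; then echo "$*"; else "$@"; fi; }

# 32KiB nodes instead of the 16KiB default; allowed values are powers
# of two between the sector size and 64KiB.
run mkfs.btrfs --nodesize 32768 /dev/sdX
```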