Whatever you do, do not shut down that mongod process until you have backed up your data (see below). There are files missing from that database directory, and I suspect they have been manually deleted at the OS level. The data files should never have gaps in them: you should have files starting at myBase.0 all the way up to myBase.37, with no gaps in the numbering.
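To check for gaps, here is a quick sketch at the shell level - the dbpath and base name below are placeholders for your own layout:

```shell
# Sketch: list any gaps in the numbered data files for one database.
# Assumes MMAPv1-style names like myBase.0, myBase.1, ... under the dbpath;
# the path and base name passed in are placeholders.
check_gaps() {
  dbpath="$1"; base="$2"; max=-1
  for f in "$dbpath/$base".*; do
    n="${f##*.}"
    case "$n" in ''|*[!0-9]*) continue ;; esac   # skip .ns and other non-numeric suffixes
    [ "$n" -gt "$max" ] && max="$n"
  done
  i=0
  while [ "$i" -le "$max" ]; do
    [ -e "$dbpath/$base.$i" ] || echo "missing: $base.$i"
    i=$((i + 1))
  done
}

check_gaps /data/db myBase
```

If the function prints nothing, the sequence is intact up to the highest-numbered file present.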
To explain: if you delete the files using rm or similar at the OS level, the deletion will succeed because the OS allows it, but since the running mongod process holds open file handles to those files, the operating system will not actually remove them until you stop the process.
Here's an example of what the lsof command shows for a normal data file called foo.0:

    mongod  5786  adam  mem  REG  9,0  67108864  805306654  /data/db/test0/foo.0
And here is what it looks like when you have manually deleted the file:

    mongod  5786  adam  24u  REG  9,0  67108864  805306654  /data/db/test0/foo.0 (deleted)
From within MongoDB that file still exists and is accessible - I can query it, run db.stats(), etc. successfully - but if that mongod process is restarted, the file will be removed and the data will at that point essentially be gone (barring efforts to undelete at the filesystem level).
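This behaviour is not MongoDB-specific; any process holding an open descriptor keeps a deleted file's data alive. A minimal illustration in plain shell:

```shell
# Sketch: a file unlinked while a descriptor is open stays readable through
# that descriptor; the space is only reclaimed once the last handle closes.
tmp=$(mktemp)
echo "still readable" > "$tmp"
exec 3< "$tmp"    # hold the file open, as mongod holds its data files
rm "$tmp"         # the directory entry is gone, the data is not (yet)
cat <&3           # prints "still readable"
exec 3<&-         # closing the last handle is when the data is really freed
```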
So, what should you do? Well, the first thing is to make sure you have a copy of the data before shutting down that process and losing it. To do that you have a couple of options:
- If this is a node in a replica set (even a single-node one), add a new secondary member and let it sync - the sync will still succeed, and you will then have a fully populated copy of the data ready to take over on that secondary. (Note: if this is not a replica set, you can't turn it into one without restarting the process, and that restart would delete the data - my recommendation is to always run as a replica set, even a single node, for anything in production.)
- Run mongodump to dump the data out somewhere else before it gets deleted. This won't be fast, and you will need plenty of space, but at least it will give you an easily restorable version of your data.
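Sketches of both options as shell functions - the hostnames, ports, and output path are placeholders, and nothing runs until you call the function you want with real values filled in:

```shell
# Option 1 (replica set): from the mongo shell on the primary, add a fresh
# member and let it perform an initial sync. Host:port is a placeholder.
add_new_secondary() {
  mongo --eval 'rs.add("newmember.example.com:27017")'
}

# Option 2: dump everything out of the still-running mongod. The output
# path is a placeholder and needs room for a full uncompressed copy.
dump_before_restart() {
  mongodump --host localhost --port 27017 --out /backup/pre-restart-dump
}
```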
A repair of the database might also work, but only if you have enough free space on that disk to accommodate 2x the data plus index size. It must be the repair command run against the live process, not a restart with --repair, because the restart would cause the files to be deleted.
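As a sketch (the database name and connection details are placeholders), the safe form is the command run against the live process via the mongo shell client:

```shell
# Sketch: trigger a repair on the live mongod from the shell client
# (NOT a restart with --repair, which would first close - and so delete -
# the open files). Needs roughly 2x (data + index) free space on the volume.
# Not run automatically; call it with your real database name.
repair_live_database() {
  mongo myBase --eval 'db.repairDatabase()'
}
```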
Finally, you need to figure out what is deleting these files and stop it - is there a cron job or other process that is automatically deleting large files (the data files will usually be 2GB) over a certain age or similar? I've seen things like that wipe out MongoDB data files before, with similar results.
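A few places worth checking for such a job - the dbpath below is a placeholder:

```shell
# Sketch: look for scheduled cleanup jobs that might be unlinking files
# under the dbpath. Each step tolerates the location being absent.
audit_cleanup_jobs() {
  dbpath="$1"
  crontab -l 2>/dev/null || true                       # per-user cron jobs
  cat /etc/crontab 2>/dev/null || true                 # system crontab
  grep -rl "$dbpath" /etc/cron.* 2>/dev/null || true   # cron drop-in dirs
}

audit_cleanup_jobs /data/db
```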
MongoDB (for the MMAP storage engine) will allocate 3 journal files by default at 1GiB each. That's where your journal-related space usage is coming from, but it will not grow unless you have a very high insert rate.
You can start with the smallfiles option to reduce the journal to 3 x 128MiB if you wish, but be aware that your data files will also be capped (at 512MiB each), so there will be many more of them and they will need to be allocated more often as you add data.
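If you do enable it, the setting lives in the mongod config file; a sketch, noting that the exact key varies by version (smallfiles = true in the older ini-style file, storage.smallFiles in the 2.6 YAML format, storage.mmapv1.smallFiles from 3.0 on):

```yaml
# mongod.conf fragment (MMAPv1 only): caps data files at 512MiB and
# journal files at 128MiB, at the cost of more, smaller files.
storage:
  mmapv1:
    smallFiles: true
```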
As for whether to increase your storage, that depends on how much data you intend to add to the database - it will need to allocate new data files to store any data you insert, so whether 10GiB is enough really depends on your planned usage.
Best Answer
You could do it using GridFS. This is the best approach.
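As a sketch of what that looks like in practice (the database name and file path are placeholders), using the mongofiles tool that ships with MongoDB - GridFS chunks the file across documents, so it sidesteps the 16MB per-document limit:

```shell
# Sketch: store a file into GridFS and fetch it back with mongofiles.
# Not run automatically; call it with a real database name and path.
store_and_fetch() {
  mongofiles --db myFiles put /path/to/large-file.bin
  mongofiles --db myFiles get /path/to/large-file.bin
}
```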