MongoDB aaS – administering volume/disk size – consequences of a full volume – new file allocation failure

disk-space, disk-structures, mongodb, mongodb-3.0, operator

We run MongoDB (currently 3.0.6) as a Service. MongoDB runs inside a Docker container with a small 8 GB volume where the mongod data files are stored persistently. The volume can't be extended; this is an automation and business constraint.

The customer can't see the disk size (df -h) and only has the dbOwner role, so db.stats() doesn't work for them.

> db.getUser("rfJpoljpiG7rIn9Q")
{
  "_id": "RuhojEtHMBnSaiKC.rfJpoljpiG7rIn9Q",
  "user": "rfJpoljpiG7rIn9Q",
  "db": "RuhojEtHMBnSaiKC",
  "roles": [
    {
      "role": "dbOwner",
      "db": "RuhojEtHMBnSaiKC"
    }
  ]
}

I tested how the volume fills up and have some questions.

After creating an empty DB:

# df -h .
Filesystem      Size  Used Avail Use% Mounted on
/dev/vdm        7.8G  6.2G  1.2G  84% /data/19a39418-320e-4557-8495-2e79fcbe1ca4

I ran a loop of GridFS uploads with various sizes and different data (a sketch of the loop follows the example below).

$ mongofiles -h localhost --port 3000 -d xxx -u xxx -p xx put test.pdf 
2016-06-03T14:26:49.244+0200    connected to: localhost:3000
added file: test.pdf
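The exact loop isn't shown above; a minimal sketch of the kind of loop I ran, with the database name, credentials, and file sizes as placeholders, looks like this:

for i in $(seq 1 200); do
    # build a file of 1-10 MB from random bytes (sizes are arbitrary)
    dd if=/dev/urandom of=test-$i.tmp bs=1M count=$(( RANDOM % 10 + 1 )) 2>/dev/null
    # push it into GridFS; -d/-u/-p are placeholders for the real values
    mongofiles -h localhost --port 3000 -d xxx -u xxx -p xxx put test-$i.tmp
done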

Soon I see this in the logs. mongod is trying to pre-allocate the next data file at about 2 GB (2146435072 bytes) while only 1.2 GB is free, so the allocation fails with errno 28 (no space left on device):

2016-06-03T13:04:51.731+0000 I STORAGE  [FileAllocator] allocating new datafile /data/work/mongodb/data/RuhojEtHMBnSaiKC.7, filling with zeroes...
2016-06-03T13:04:51.744+0000 I STORAGE  [FileAllocator] FileAllocator: posix_fallocate failed: errno:28 No space left on device falling back
2016-06-03T13:04:51.748+0000 I STORAGE  [FileAllocator] error: failed to allocate new file: /data/work/mongodb/data/RuhojEtHMBnSaiKC.7 size: 2146435072 failure creating new datafile; lseek failed for fd 25 with errno: errno:2 No such file or directory.  will try again in 10 seconds
2016-06-03T13:05:01.749+0000 I STORAGE  [FileAllocator] allocating new datafile /data/work/mongodb/data/RuhojEtHMBnSaiKC.7, filling with zeroes...
2016-06-03T13:05:01.756+0000 I STO^C

# df -h .
Filesystem      Size  Used Avail Use% Mounted on
/dev/vdm        7.8G  6.2G  1.2G  84% /data/19a39418-320e-4557-8495-2e79fcbe1ca4

and found this explanation in the ObjectRocket docs:

Because our ObjectRocket instances run with the smallfiles option, the
first extent is allocated as 16MB. These extents double in size until
they reach 512MB, after which every extent is allocated as a 512MB
file. So our example "ocean" database has a file structure as follows:

These extents store both the data and indexes for our database. With
MongoDB, as soon as any data is written to an extent, the next logical
extent is allocated. Thus, with the above structure, ocean.6 likely
has no data at the moment, but has been pre-allocated for when ocean.5
becomes full. As soon as any data is written to ocean.6, a new 512MB
extent, ocean.7, will again be pre-allocated. When data is deleted
from a MongoDB database, the space is not released until you compact
— so over time, these data files can become fragmented as data is deleted (or if a document outgrows its original storage location
because additional keys are added). A compaction defragments these
data files because during a compaction, the data is replicated from
another member of the replica set and the data files are recreated
from scratch.

Filesystem view (the breakdown after the listing shows how these files add up to the 6.2 GB that df reports):

# ls -alh
total 6.2G
drwxr-xr-x. 5 chrony ssh_keys 4.0K Jun  3 15:00 .
drwxr-xr-x. 5 chrony ssh_keys 4.0K Jun  3 13:20 ..
drwxr-xr-x. 2 chrony ssh_keys 4.0K Jun  3 13:20 admin
-rw-------. 1 chrony ssh_keys  64M Jun  3 14:01 admin.0
-rw-------. 1 chrony ssh_keys  16M Jun  3 14:01 admin.ns
drwxr-xr-x. 2 chrony ssh_keys 4.0K Jun  3 13:20 local
-rw-------. 1 chrony ssh_keys  64M Jun  3 13:20 local.0
-rw-------. 1 chrony ssh_keys  16M Jun  3 13:20 local.ns
-rwxr-xr-x. 1 chrony ssh_keys    2 Jun  3 13:20 mongod.lock
-rw-------. 1 chrony ssh_keys  64M Jun  3 15:58 RuhojEtHMBnSaiKC.0
-rw-------. 1 chrony ssh_keys 128M Jun  3 15:58 RuhojEtHMBnSaiKC.1
-rw-------. 1 chrony ssh_keys 256M Jun  3 15:58 RuhojEtHMBnSaiKC.2
-rw-------. 1 chrony ssh_keys 512M Jun  3 15:58 RuhojEtHMBnSaiKC.3
-rw-------. 1 chrony ssh_keys 1.0G Jun  3 15:58 RuhojEtHMBnSaiKC.4
-rw-------. 1 chrony ssh_keys 2.0G Jun  3 15:26 RuhojEtHMBnSaiKC.5
-rw-------. 1 chrony ssh_keys 2.0G Jun  3 15:58 RuhojEtHMBnSaiKC.6
-rw-------. 1 chrony ssh_keys  16M Jun  3 15:58 RuhojEtHMBnSaiKC.ns
-rw-r--r--. 1 chrony ssh_keys   69 Jun  3 13:20 storage.bson
drwxr-xr-x. 2 chrony ssh_keys 4.0K Jun  3 16:03 _tmp
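These sizes follow the default MMAPv1 allocation sequence (the first data file is 64 MB, each subsequent file doubles, capped at 2 GB), and summing them explains the df number:

64M + 128M + 256M + 512M + 1G + 2G + 2G  = ~6.0G  (RuhojEtHMBnSaiKC.0 - .6)
+ 16M (.ns) + 80M (admin) + 80M (local)  = ~6.2G total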

I see this error with GridFS, even though several GB appear to be available and I'm only uploading 1 MB files in the loop:

2016-06-03T16:34:42.454+0200    Failed: error while storing 'test-94.tmp' into GridFS: new file allocation failure

2016-06-03T16:34:42.623+0200    connected to: localhost:3000
2016-06-03T16:34:42.917+0200    Failed: error while storing 'test-95.tmp' into GridFS: new file allocation failure

2016-06-03T16:34:43.090+0200    connected to: localhost:3000
2016-06-03T16:34:43.412+0200    Failed: error while storing 'test-96.tmp' into GridFS: new file allocation failure

[... the same failure repeats for every upload through test-123.tmp ...]

2016-06-03T16:34:54.743+0200    connected to: localhost:3000
2016-06-03T16:34:55.048+0200    Failed: error while storing 'test-124.tmp' into GridFS: new file allocation failure

Why does every upload fail while df still shows free space?

It is also strange that there are no new entries in mongod.log. Why?

2016-06-03T13:04:43.996+0000 I ACCESS   [conn9] Unauthorized not authorized on admin to execute command { serverStatus: 1.0 }
2016-06-03T13:04:51.731+0000 I STORAGE  [FileAllocator] allocating new datafile /data/work/mongodb/data/RuhojEtHMBnSaiKC.7, filling with zeroes...
2016-06-03T13:04:51.744+0000 I STORAGE  [FileAllocator] FileAllocator: posix_fallocate failed: errno:28 No space left on device falling back
2016-06-03T13:04:51.748+0000 I STORAGE  [FileAllocator] error: failed to allocate new file: /data/work/mongodb/data/RuhojEtHMBnSaiKC.7 size: 2146435072 failure creating new datafile; lseek failed for fd 25 with errno: errno:2 No such file or directory.  will try again in 10 seconds
2016-06-03T13:05:01.749+0000 I STORAGE  [FileAllocator] allocating new datafile /data/work/mongodb/data/RuhojEtHMBnSaiKC.7, filling with zeroes...
2016-06-03T13:05:01.756+0000 I STO

There should be a log entry for every new connection, but there has been no new line for hours. The database is still online, at least via the mongo shell.

I decided to insert random data, using an example script (reconstructed after the output below):

$ mongo mongodb://xxx:xxx@localhost:3000/RuhojEtHMBnSaiKC --eval "var arg1=50000000;arg2=1" create_random_data.js 
Job#1 inserted 49400000 documents.
Job#1 inserted 49500000 documents.
Job#1 inserted 49600000 documents.
Job#1 inserted 49700000 documents.
Job#1 inserted 49800000 documents.
Job#1 inserted 49900000 documents.
Job#1 inserted 50000000 documents.
Job#1 inserted 50000000 in 1538.035s
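create_random_data.js itself isn't shown; a hypothetical reconstruction consistent with the invocation and output above (arg1 = document count, arg2 = job number; the collection name randomData is taken from the show collections output further down) could be:

// create_random_data.js - hypothetical reconstruction, not the original
// arg1 = number of documents to insert, arg2 = job number (set via --eval)
var total = arg1, job = arg2;
var start = new Date();
var batch = [];
for (var i = 1; i <= total; i++) {
    batch.push({ x: Math.random(), ts: new Date() });
    if (batch.length === 1000) {        // insert in batches to keep it fast
        db.randomData.insert(batch);
        batch = [];
    }
    if (i % 100000 === 0) {
        print("Job#" + job + " inserted " + i + " documents.");
    }
}
if (batch.length > 0) { db.randomData.insert(batch); }
print("Job#" + job + " inserted " + total + " in " +
      ((new Date() - start) / 1000) + "s");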

I also ran another example script that inserts documents with large random strings:

// builds a random string of string_length characters; the += concatenation
// in a loop is slow, which likely explains the ~3s per insert below
function randomString() {
    var chars = "0123456789ABCDEFGHIJKLMNOPQRSTUVWXTZabcdefghiklmnopqrstuvwxyz";
    var randomstring = '';
    var string_length = 10000000;
    for (var i = 0; i < string_length; i++) {
        var rnum = Math.floor(Math.random() * chars.length);
        randomstring += chars.substring(rnum, rnum + 1);
    }
    return randomstring;
}

for (var i = 0; i < 2000000; i++) { db.test.save({x: i, data: randomString()}); }


Inserted 1 record(s) in 3199ms
Inserted 1 record(s) in 3059ms
Inserted 1 record(s) in 3264ms
Inserted 1 record(s) in 3279ms
Inserted 1 record(s) in 3187ms
Inserted 1 record(s) in 3133ms
Inserted 1 record(s) in 2999ms
Inserted 1 record(s) in 3220ms
Inserted 1 record(s) in 2966ms
Inserted 1 record(s) in 3161ms
Inserted 1 record(s) in 3165ms
Inserted 1 record(s) in 3154ms
Inserted 1 record(s) in 3362ms
Inserted 1 record(s) in 3288ms
Inserted 1 record(s) in 3184ms
new file allocation failure
new file allocation failure
new file allocation failure
new file allocation failure
new file allocation failure
new file allocation failure
new file allocation failure

Read-only access still works:

> db.test.find();
{
  "_id": ObjectId("5751a595b9f7999857650c13"),
  "x": 0,
  "data": "xFmUATFIEWao4moOZ0SknNo56dg49TTyQcVgGBTeyE2RUKr7WQ6s0BpmhvSlrAuTBDGpZfPDGtfRrNLSpA8PcbNMkWfCoFFMevCC"
}
{
  "_id": ObjectId("5751a595b9f7999857650c14"),
  "x": 1,
  "data": "IKsbGictFAtcgfMfUggzfHZSiPreWW3Tm8ik8tgLDERWUo2P1Lh2RKBardHUhaEZfuaaM7ofFRGKKHSFwGNcUQA051mMgOxpNvbN"
}
{
  "_id": ObjectId("5751a595b9f7999857650c15"),
  "x": 2,
  "data": "MXQySK5RsMrXTw8JuRzxIeAaxSgNhXdkFzOhcbZZcsTSU7T1sBLTyps7mw0vlGaOzCvJQz08BKr9ALXEPKpl3REUGZMTAx3wccur"
}
[... 17 more similar documents elided ...]
Fetched 20 record(s) in 72ms -- More[true]

I can even insert tiny documents (presumably because they still fit into already-allocated extents):

> db.users.insertMany([
    { name: "bob", age: 42, status: "A" },
    { name: "ahn", age: 22, status: "A" },
    { name: "xi",  age: 34, status: "D" }
  ])
{
  "acknowledged": true,
  "insertedIds": [
    ObjectId("5751a807758c56125f57a556"),
    ObjectId("5751a807758c56125f57a557"),
    ObjectId("5751a807758c56125f57a558")
  ]
}

> db.stats(1024);
{
  "db": "RuhojEtHMBnSaiKC",
  "collections": 8,
  "objects": 12364085,
  "avgObjSize": 442.12763435385637,
  "dataSize": 5338382.47265625,
  "storageSize": 5535032,
  "numExtents": 50,
  "indexes": 6,
  "indexSize": 392639.625,
  "fileSize": 6223872,
  "nsSizeMB": 16,
  "extentFreeList": {
    "num": 0,
    "totalSize": 0
  },
  "dataFileVersion": {
    "major": 4,
    "minor": 22
  },
  "ok": 1
}
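Note that db.stats(1024) reports sizes in KB (the argument is a scale factor), so a quick conversion shows where the space has gone:

> var s = db.stats(1024 * 1024)   // same stats, scaled to MB
> s.fileSize      // ~6078 MB - total size of the allocated data files
> s.storageSize   // ~5405 MB - space allocated to collection extents
> s.dataSize      // ~5213 MB - actual BSON data stored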

File descriptors inside the container (from /proc/1/fd):

dr-x------. 2 mongod mongod  0 Jun  3 11:20 .
dr-xr-xr-x. 8 mongod mongod  0 Jun  3 11:20 ..
lr-x------. 1 mongod mongod 64 Jun  3 11:20 0 -> /dev/null
l-wx------. 1 mongod mongod 64 Jun  3 11:20 1 -> pipe:[1433257418]
lrwx------. 1 mongod mongod 64 Jun  3 16:00 10 -> /data/work/mongodb/data/admin.0
lrwx------. 1 mongod mongod 64 Jun  3 16:00 11 -> /data/work/mongodb/data/local.ns
lrwx------. 1 mongod mongod 64 Jun  3 16:00 12 -> /data/work/mongodb/data/local.0
lrwx------. 1 mongod mongod 64 Jun  3 16:00 13 -> socket:[1437446183]
lrwx------. 1 mongod mongod 64 Jun  3 16:00 14 -> /data/work/mongodb/data/RuhojEtHMBnSaiKC.ns
lrwx------. 1 mongod mongod 64 Jun  3 16:00 15 -> /data/work/mongodb/data/RuhojEtHMBnSaiKC.0
lrwx------. 1 mongod mongod 64 Jun  3 16:01 16 -> socket:[1438081191]
lrwx------. 1 mongod mongod 64 Jun  3 16:00 17 -> /data/work/mongodb/data/RuhojEtHMBnSaiKC.1
l-wx------. 1 mongod mongod 64 Jun  3 11:20 2 -> pipe:[1433257419]
lrwx------. 1 mongod mongod 64 Jun  3 16:00 20 -> /data/work/mongodb/data/RuhojEtHMBnSaiKC.2
lrwx------. 1 mongod mongod 64 Jun  3 16:00 21 -> /data/work/mongodb/data/RuhojEtHMBnSaiKC.3
lrwx------. 1 mongod mongod 64 Jun  3 16:00 22 -> /data/work/mongodb/data/RuhojEtHMBnSaiKC.4
lrwx------. 1 mongod mongod 64 Jun  3 16:00 23 -> /data/work/mongodb/data/RuhojEtHMBnSaiKC.5
lrwx------. 1 mongod mongod 64 Jun  3 16:00 24 -> /data/work/mongodb/data/RuhojEtHMBnSaiKC.6
lr-x------. 1 mongod mongod 64 Jun  3 11:20 3 -> /dev/urandom
l-wx------. 1 mongod mongod 64 Jun  3 11:20 4 -> /data/work/mongodb/logs/mongod.log
lr-x------. 1 mongod mongod 64 Jun  3 11:20 5 -> /dev/urandom
lrwx------. 1 mongod mongod 64 Jun  3 11:20 6 -> socket:[1433261184]
lrwx------. 1 mongod mongod 64 Jun  3 16:00 7 -> socket:[1433261185]
lrwx------. 1 mongod mongod 64 Jun  3 16:00 8 -> /data/work/mongodb/data/mongod.lock
lrwx------. 1 mongod mongod 64 Jun  3 16:00 9 -> /data/work/mongodb/data/admin.ns
bash-4.2$ pwd
/proc/1/fd

Now I dropped a collection:

> show collections
fs.chunks      → 3733.953MB / 3736.602MB
fs.files       →    0.021MB /    0.039MB
randomData     → 1318.577MB / 1506.949MB
system.indexes →    0.001MB /    0.008MB
system.profile →    0.105MB /    1.000MB
test           →  160.600MB /  160.664MB
users          →    0.008MB /    0.039MB

> db.test.drop();
true

Now my random-data script isn't working anymore:

// same script, but string_length is now 100,000,000 - a ~100 MB string, which
// would exceed the 16 MB BSON document limit even if disk space were available
function randomString() {
    var chars = "0123456789ABCDEFGHIJKLMNOPQRSTUVWXTZabcdefghiklmnopqrstuvwxyz";
    var randomstring = '';
    var string_length = 100000000;
    for (var i = 0; i < string_length; i++) {
        var rnum = Math.floor(Math.random() * chars.length);
        randomstring += chars.substring(rnum, rnum + 1);
    }
    return randomstring;
}

for (var i = 0; i < 2000000; i++) { db.test.save({x: i, data: randomString()}); }

The test collection doesn't get created.

Why can I use only 5.12 GB (the storageSize below) of the 7.8 GB volume?

> db.stats(1024);
{
  "db": "RuhojEtHMBnSaiKC",
  "collections": 7,
  "objects": 12360157,
  "avgObjSize": 428.64359376664873,
  "dataSize": 5173927.84765625,
  "storageSize": 5370512,
  "numExtents": 45,
  "indexes": 5,
  "indexSize": 392503.890625,
  "fileSize": 6223872,
  "nsSizeMB": 16,
  "extentFreeList": {
    "num": 7,
    "totalSize": 165160
  },
  "dataFileVersion": {
    "major": 4,
    "minor": 22
  },
  "ok": 1
}
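The extentFreeList entry is telling: it now holds roughly the size of the dropped test collection, i.e. the drop returned the extents to the database's internal free list rather than to the filesystem:

> db.stats(1024).extentFreeList
{ "num": 7, "totalSize": 165160 }    // ~161 MB, about the storage size of
                                     // the dropped test collection (160.664 MB)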

The database seems to be in a strange state.

How can Ops administer volumes with the least effort while keeping customers happy?

Best Answer

You mention the smallfiles option (from the ObjectRocket docs), but your ls output suggests that you are not actually using it. If you were, your maximum data file size would be 512 MB, yet you have 2 GB files (the default maximum). This also explains your issues.

As soon as you fill up your existing data files and another write comes in (it's a little more complicated than that, but it's a good way to think about it), MongoDB will try to allocate a new data file, again at 2 GB. You don't have enough space for a new 2 GB file, hence the errors and failures.

Therefore, if you turn on smallfiles, you will be able to use more space and get closer to the volume's maximum usage, as in the sketch below. The pre-allocation behaviour can also be tweaked, but that is not as important in 3.0 as it would have been in older versions (MMAP pre-allocation was improved in later releases).
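A minimal sketch of the corresponding settings in the mongod YAML config file (assuming the stock MMAPv1 engine; all other options omitted):

storage:
  mmapv1:
    smallFiles: true          # cap data files at 512MB instead of 2GB
    preallocDataFiles: false  # don't pre-allocate the next data file in advance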

Finally, as mentioned elsewhere, you can also try WiredTiger, though I would recommend upgrading to 3.2 first (where it is now the default storage engine). WiredTiger supports compression, with snappy on by default, and more aggressive options are available so that you can essentially trade CPU for disk space efficiency (I analysed the impact of the various options here some time ago, for reference).
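For example, a sketch of running with WiredTiger and the more aggressive zlib block compressor instead of the default snappy (the dbpath is a placeholder):

mongod --storageEngine wiredTiger \
       --wiredTigerCollectionBlockCompressor zlib \
       --dbpath /data/work/mongodb/data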