A few patterns are evident in your data:
- The primary activity is updates.
- Updates continue for several seconds after queries/inserts.
- The criteria fields in your finds are indexed.
- Even so, the 'index miss' rate is very high.
- Background flush is very low (as it should be with SSD) so you're not getting I/O contention.
- Lock-average is extremely high.
Based on this, I suspect that your documents are created small and continually appended to, and that there is significant variation in final document size, which renders Mongo's padding-factor optimization less useful. If that's happening, you're running into a particular write penalty that goes like this:
- Document is INSERTed, Mongo allocates extra free space due to padding factor. (write: 512B)
- Document is updated. (write 1KB)
- Document is updated. (write 1.5KB)
- Document is updated, but there isn't any adjacent free-space left.
- Mongo moves the Document to a new spot with free space.
- Mongo writes the whole, appended document in a new free block (write 2KB) and marks the old block as available.
- Mongo reindexes every indexable field on the document (depending on what kind of indexes you have, this could be a big write hit).
- Document is updated. (write 512B)
Stub records that are continually appended to are a bit of an anti-pattern with MongoDB due to the penalty of growing beyond the padding factor. You can get away with it if your documents end up about the same size, as the padding factor can compensate.
However, if you have to continually append data to records and can't rely on the padding factor, you'll have to manually pad at insert time. When you create the record, add enough junk fields to make it close to your average document size, and delete/unset the junk field on your first update. This will reduce the incidence of moves like the above, and should bring your lock-average down.
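A minimal sketch of that insert-time manual padding, in mongo-shell-style JavaScript. The collection name (events), field names (_padding, entries), and the 2 KB target are all assumptions for illustration, and JSON length is used as a rough stand-in for BSON size (the shell's Object.bsonsize() is more accurate):

```javascript
// Pad a stub document up to roughly the average final size, so later
// appends land in pre-allocated space instead of forcing a move.
function makePaddedStub(doc, targetBytes) {
  // JSON length is a rough proxy for BSON size, good enough for sizing junk.
  var gap = Math.max(0, targetBytes - JSON.stringify(doc).length);
  doc._padding = new Array(gap + 1).join("x"); // junk field, removed later
  return doc;
}

var stub = makePaddedStub({ _id: 42, entries: [] }, 2048);
// db.events.insert(stub);                       // insert the padded stub
// db.events.update({ _id: 42 },                 // on the first real update,
//   { $unset: { _padding: "" },                 // drop the junk field so the
//     $push: { entries: "first entry" } });     // freed space absorbs appends
```

The $unset on the first real update frees the space in place, so subsequent $push operations grow into it rather than triggering a document move.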
I also suspect you're running into full table-scans to return records, as that's what the 'idx miss' column in the mongostat
output reports. Update calls do run finds; that's how the system locates the record to update. An index miss forces a double read: first a read of the index that fails to locate the record, then a full table-scan to find it. Typically this is caused by there not being enough RAM to hold the index, but it may also be caused by an update modifying an indexed field.
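To pin down where the misses come from, it may help to check whether the criteria actually hit the index and whether the index fits in RAM. A hypothetical mongo-shell session (collection and field names are illustrative; the BasicCursor/BtreeCursor output shown is the pre-3.0 explain() format that matches mongostat's idx miss era, and these commands assume a running mongod):

```javascript
db.events.ensureIndex({ userId: 1 });       // index the update/find criteria
db.events.find({ userId: 42 }).explain();   // look for "cursor": "BtreeCursor userId_1";
                                            // "BasicCursor" means a full table scan
db.events.stats().indexSizes;               // total index size vs. available RAM
db.serverStatus().indexCounters;            // missRatio should be near zero
```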
I think that you missed one step. You should add an entry to the keytab file for your principal mongodb/mongodb.centos7.vm@CENTOS7.VM.
Suppose that the path for your keytab file is /etc/krb5.keytab. Now run $ ktutil
ktutil: add_entry -password -p mongodb/mongodb.centos7.vm@CENTOS7.VM -k 1 -e des-cbc-md4
ktutil: wkt /etc/krb5.keytab
ktutil: quit
Now you can run $ klist -k /etc/krb5.keytab
and the new entry should appear in the output.
Now you should be able to start the mongod service.
There is no such notion as CPU "traffic". A single thread runs on a single core, always. While a single core can execute multiple threads and even multiple processes, a single thread cannot be split to execute across multiple cores. That would require the system to understand the purpose of the program, which would be a pretty frightening thing.
So let us assume you have 4 threads and 4 cores. Now assume 3 of those threads do not have much to do (one is logging, one is listening for configuration changes, and one accepts new connections and spawns additional threads to deal with them), but the fourth really has to do some computation. The core that computationally intensive thread is scheduled on will show higher utilization. And this behavior is even intended – or would you want the system to slow down in accepting new connections just because somebody made a computationally intensive request?