Math first: you moved 150M documents in 2 days, which is roughly 860 documents per second including metadata and indices, with reading and writing all happening on the same machine. That is not what I would call slow. The description that comes to mind is "lightning fast". ;)
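For reference, the back-of-the-envelope arithmetic (assuming a full 2 x 24 hours of wall-clock time):

```
// 150M documents moved over 2 days:
150000000 / (2 * 24 * 60 * 60)   // ≈ 868 documents per second
```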
Since there is no real distribution of the write load, an easy way to speed things up is to add two or more machines.
A few notes: sharding production data onto shards that are not replica sets is dangerous, to say the least. If one of the shards fails, the data it contains is unavailable until you get the shard up and running again. On top of that, while there is no server to write to for that shard's key ranges, values in those ranges cannot be written at all. If the shard were a replica set, a failure would instead lead to the election of a new primary, to which all writes and (depending on the configuration) most or even all reads would go.
`_id` can be used as a shard key; however, it should be hashed.
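A minimal sketch of what that looks like, assuming a hypothetical mydb.mycoll namespace:

```
// Enable sharding on the database, then shard the collection on a hashed _id
sh.enableSharding("mydb")
sh.shardCollection("mydb.mycoll", { _id: "hashed" })
```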
There is a lot to go through here, so I'll take it piece by piece; first off, splitting:
I thought this meant that when a chunk hits 64mb, it splits into two
equal chunks both of size 32mb. That's what is demonstrated here. Is
that not correct?
That's not quite how it works. If you have a 64MB chunk and you manually run a splitFind command, you will get (by default) 2 chunks split at the mid-point. Auto splitting is done differently though - the details are actually quite involved, but use what I explain as a rule of thumb and you will be close enough.
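To illustrate the manual case, a split from the shell looks something like this (the namespace and shard key values are hypothetical):

```
// Split the chunk containing the matching document at its median point
sh.splitFind("mydb.mycoll", { user_id: 12345 })

// Or split at an explicit key value rather than the mid-point
sh.splitAt("mydb.mycoll", { user_id: 20000 })
```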
Each mongos tracks how much data it has seen inserted/updated for each chunk (approximately). When it sees that ~20% of the maximum chunk size (so 12-13MiB by default) has been written to a particular chunk, it will attempt an automatic split of that chunk. It sends a splitVector command to the primary that owns the chunk, asking it to evaluate the chunk range and return any potential split points. If the primary replies with valid points, then the mongos will attempt to split on those points. If there are no valid split points, then the mongos will retry this process when the updates/writes reach 40%, then 60% of the max chunk size.
As you can see, this does not wait for a chunk to reach the maximum size before splitting; in fact, splits should happen long before that, and in a normally operating cluster you should not generally see chunks that large.
What's up with this? How can the first 3 shards/replica sets have an
average size greater than 64mb when that is set to be chunkSize? Rs_2
is 119mb!
The only thing preventing large chunks from occurring is the auto-split functionality described above. Your average chunk sizes suggest that something is preventing the chunks from being split. There are a couple of possible reasons for this, but the most common is that the shard key is not granular enough.
If your chunk ranges get down to a single key value then no further splits are possible and you get "jumbo" chunks. I would need to see the ranges to be sure, but you can probably inspect them easily enough manually with `sh.status(true)`; for a more easily digestible version, take a look at this Q&A I posted about determining the chunk distribution.
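If you would rather query the metadata directly, something along these lines works on versions from this era (the namespace is a placeholder, and the config schema differs on much newer versions):

```
// List each chunk's range and owning shard for one collection, in key order
use config
db.chunks.find(
  { ns: "mydb.mycoll" },
  { min: 1, max: 1, shard: 1, _id: 0 }
).sort({ min: 1 })
```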
If that is the issue, you only really have two choices: either live with the jumbo chunks (and possibly increase the max chunk size to allow them to move around - any chunk over the max will have its migration aborted and be tagged as "jumbo" by the mongos), or re-shard the data with a more granular shard key that prevents the creation of single-key chunks.
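Should you decide to raise the max chunk size, it is a cluster-wide setting stored in the config database; on versions from this era it can be changed like so (128MB is just an example value):

```
// Set the maximum chunk size to 128MB (the value is in megabytes)
use config
db.settings.save({ _id: "chunksize", value: 128 })
```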
Rs_2 has 27.53% of the data when it should have 16.6%.
This is a fairly common misconception about the balancer - it does not balance based on data size, it just balances the number of chunks (which you can see are nicely distributed) - from that perspective a chunk with 0 documents in it counts just the same as one with 250k documents. Hence, the imbalance in terms of data comes from the imbalance within the chunks themselves (some contain a lot more data than others).
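You can verify this yourself by counting chunks per shard in the config database - the counts will look even though the data inside them is not (the namespace is a placeholder):

```
// Chunk count per shard for one collection
use config
db.chunks.aggregate([
  { $match: { ns: "mydb.mycoll" } },
  { $group: { _id: "$shard", nChunks: { $sum: 1 } } }
])
```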
What should I do here? I can manually find chunks that are large and
split them, but that is a pain. Do I lower my chunkSize?
Lowering the chunk size would cause the mongos to check for split points more frequently, but it won't help if the splits are failing (which your chunk size averages suggest is the case); it will just fail more often. As a first step I would find the largest chunks (see the Q&A link above) and split those as a priority.
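To size up a suspect chunk before splitting it, you can point the dataSize command at the chunk's range (all the names and bounds below are hypothetical):

```
// Measure how much data sits in the range [min, max) of one chunk
use mydb
db.runCommand({
  dataSize: "mydb.mycoll",
  keyPattern: { user_id: 1 },
  min: { user_id: 1000 },
  max: { user_id: 2000 }
})

// If the reported size is well above the configured maximum, split it manually
sh.splitFind("mydb.mycoll", { user_id: 1500 })
```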
If you are going to do any manual splitting or moving, I recommend turning off the balancer so that it is not holding the metadata lock and does not kick in as soon as you start splitting. It's also generally a good idea to do this at a low-traffic time, because otherwise the auto-splitting I described above could interfere as well.
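Toggling the balancer from the shell looks like this:

```
// Turn the balancer off before manual maintenance, and confirm it is off
sh.setBalancerState(false)
sh.getBalancerState()      // should return false

// ... perform manual splits / moveChunk operations ...

sh.setBalancerState(true)  // re-enable when finished
```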
After a quick search I don't have anything generic immediately to hand, but I have in the past seen scripts used to automate this process. It tends to need to be customized to fit the particular issue (imagine an imbalance due to a monotonic shard key versus an issue with chunk data density for example).
Best Answer
Update: April 2018
This answer was correct at the time of the question, but things have moved on since then. Since version 3.4, parallelism has been introduced, and the ticket I referenced originally has been closed. For more information, I cover some of the details in this more recent answer. I will leave the rest of the answer as-is, because it remains a good reference for the general issues/constraints and is still valid for anyone on an older version.
Original Answer
I give a full explanation of what happens during a chunk migration in the M202 Advanced course, if you are interested. In general terms, let's just say that migrations are not very fast, even for empty chunks, because of the housekeeping being performed to make sure migrations work in an active system (that housekeeping still happens even if nothing but balancing is going on).
Additionally, there is only one migration happening at a time on the entire cluster - there is no parallelism. So, despite the fact that you have two "full" nodes and two "empty" nodes, at any given time there is at most one migration happening (between the shard with the most chunks and the shard with the least). Hence, having added 2 shards gains you nothing in terms of balancing speed and just increases the number of chunks which have to be moved.
For the migrations themselves, the chunks are likely ~30MiB in size (this depends on how you populated the data, but generally that will be your average with the default max chunk size). You can run `db.collection.getShardDistribution()` for some of that information, and see my answer here for ways to get even more information about your chunks.

Since there is no other activity going on, for a migration to happen the target shard (one of the newly added shards) will need to read ~30MiB of data from the source shards (one of the original 2) and update the config servers to reflect the new chunk location once it is done. Moving 30MiB of data should not be much of a bottleneck for a normal system without load.
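For completeness, a quick sketch of that call (the namespace is a placeholder); the output lists, per shard, the data size, document count, chunk count and estimated data per chunk:

```
// Show how data, documents and chunks are spread across the shards
use mydb
db.mycoll.getShardDistribution()
```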
If it is slow, there are a number of possible reasons why that is the case, but the most common for a system that is not busy are:
- Replication lag: `w:2` or `w:majority` is used by default for the migration writes and requires up-to-date secondaries to satisfy it.

If the system was busy, then memory contention and lock contention would usually be suspects here too.
To get more information about how long migrations are taking, whether they are failing, etc., take a look at the entries in your `config.changelog` collection.
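A query along these lines (the filter is just an example) will show the most recent migration-related entries:

```
// Most recent migration entries, newest first
use config
db.changelog.find({ what: /moveChunk/ }).sort({ time: -1 }).limit(10)
```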
As you have seen, and as I generally tell people when I do training/education, if you know you will need 4 shards, then it's usually better to start with 4 rather than ramp up. If you do ramp up, then you need to be aware that adding a shard can take a long time, and initially it is a net negative on resources rather than a gain (see part II of my sharding pitfalls series for a more detailed discussion of that).
Finally, to track/upvote/comment on the feature request to improve the parallelism of chunk migrations, check out SERVER-4355