MongoDB document depth headache

We ran into a weird problem recently where we were unable to add new members to a replica set running MongoDB 3.4, because the initial sync to the new member kept failing.

The sync would begin, but at some point it would always fail with:

[replication-0] collection clone for 'database.collection' failed due to Overflow:
While cloning collection 'database.collection' there was an error
'While querying collection 'database.collection' there was an error 
'BSONObj exceeded maximum nested object depth: 200''

(For extra annoyance, the sync would carry on through all the other databases and collections on the replica set, only realise at the very end that it had actually failed earlier, and then restart the whole sync from the beginning.)

The error means that one or more documents have a nesting depth of more than 200 levels. That could be a chain of nested objects or a chain of nested arrays within a single document, a mistake that isn't hard to make with a buggy loop or ORM.
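
For illustration, a hypothetical mongo shell snippet along these lines (not our actual application code) builds a document nested well past that limit:

var doc = { value: 1 };
for (var i = 0; i < 250; i++) {
    doc = { child: doc };  // each iteration wraps the document one level deeper
}
// inserting doc would now be refused by versions that enforce the depth limit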

But how could such a document get into the database in the first place? Surely it should have been refused at insert time? The nesting depth limit and its enforcement have changed at various points across past versions, and a long-lived database like ours, dating back to early MongoDB 2.x, may have had these bad documents inserted before the max depth limit was enforced. Only now, when we try to use the documents, do the limits become a problem.

In our case the document was old, and it had synced without issue back on MongoDB 3.0, but it now failed with MongoDB 3.4.

Finding the document is tricky – the replication process helpfully does not log the document ID, so you can’t go and purge it from the collection to resolve the issue.

With input from colleagues whose Mongo skills are better than mine, we figured out three queries that allowed us to identify the bad documents.

1. This query finds any documents that have a long chain of nested objects inside them. It works because tojsononeline renders each document as a single line of JSON in which nested closing braces end up separated by spaces, so a long run of them indicates many levels of objects closing together.

db.collection.find({ $where: function() { return tojsononeline(this).indexOf("} } } } } } } } }") != -1 } })

2. This query finds any documents that have a long chain of nested arrays. This was the specific issue in our case, and this query successfully identified all the bad documents.

db.collection.find({ $where: function() { return tojsononeline(this).indexOf("] ] ] ] ] ] ]") != -1 } })

3. And if you get really stuck, you can find any bad document, whatever the reason it is bad, by reading every document and re-writing it back out to another collection. This ensures each document has all the limits applied again at write time; when the copy hits a bad document the write fails, and the last _id printed is the offending document, regardless of the specific reason it was refused.

db.collection.find({}).forEach(function(d) { print(d["_id"]); db.new_collection.insert(d) });
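
If the string matching feels too fragile, a rough alternative (a sketch only, with an illustrative threshold and helper name, not something we ran in production) is to pull each document into the shell and measure its nesting depth directly:

function nestingDepth(value) {
    // treat non-objects and shell types like ObjectId and Date as leaf values
    if (value === null || typeof value !== "object" || value instanceof ObjectId || value instanceof Date) {
        return 0;
    }
    var deepest = 0;
    Object.keys(value).forEach(function(key) {
        var depth = nestingDepth(value[key]);
        if (depth > deepest) { deepest = depth; }
    });
    return deepest + 1;
}

db.collection.find().forEach(function(doc) {
    // flag anything suspiciously deep, well below the 200 level hard limit
    if (nestingDepth(doc) > 150) { print(doc._id); }
});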

Note that all of these approaches tend to impact performance, since you're asking your database to read every single document. And the last one, copying the collection, could take considerable time and space to complete.

If you have any data of notable size, I recommend restoring the replica set to a test system and performing the operation there, where you know it's not going to impact production.

Once you find your bad document, you can display it with:

db.collection.find({ _id: ObjectId("54492129902178d6f600004f") });

And delete it entirely (assuming there's nothing important in it!) with:

db.collection.deleteOne({ _id: ObjectId("54492129902178d6f600004f") });
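
If there are more than a handful of bad documents, a small loop along the same lines (a sketch only, reusing the nested-array query from above and assuming the matched documents really are safe to throw away) can print and remove them in one pass:

db.collection.find({ $where: function() { return tojsononeline(this).indexOf("] ] ] ] ] ] ]") != -1 } }).forEach(function(d) {
    print(d._id);  // keep a record of what was removed
    db.collection.deleteOne({ _id: d._id });
});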
