Experience with MongoDB and Optimizations

  • Before reading further, note that this experience is based on MongoDB 6.0.14-ent with 6 shards, 3 machines per shard, each machine a VM with 140 GB RAM and a 2 TB SSD. We were hosting almost 36 TB of data.

  • MongoDB is not good at big-data joins or OLAP-style processing; it is mainly meant for OLTP workloads.
  • Instead of joining millions of keys across two collections, it is better to look a key up in one collection and then look the same key up in the other, merging the two documents for that key in the application (see sketch 1 after this list).
  • It's better to keep de-normalized data in one document.
  • The trade-off is that updating such a document later is cumbersome.
  • MongoDB can crash when overloaded with data, and recovery after a crash means long downtime. Other databases instead start failing writes once disk usage reaches a certain limit, which keeps the database up and serving read traffic.
  • MongoDB needs indexes for fast querying, and building an index can take a long time depending on the size of the data.
  • Reading one MongoDB collection and writing to another can be very slow. To speed this up one can use a distributed computing framework like Spark, but we have seen that approach lose data when scanning a full MongoDB collection.
  • Indexes themselves occupy significant space when the data is big.
  • Always use sharding. Range sharding should be avoided because it sends all writes to one shard when bulk data is loaded; hashed sharding distributes write traffic better among shards (see sketch 2 after this list).
  • Note that chunks can grow quite large in MongoDB. MongoDB does not care much about the number of chunks; it relies more on an even distribution of data size among shards.
  • Theoretically, MongoDB says there is no limit to the size of a collection. But we have seen that once a collection grows beyond 2 TB, writes to it slow down. So it's better to have 12 monthly collections for a year of data rather than one collection holding all 12 months.
  • Having multiple collections instead of one is also advantageous from the API application's perspective: rather than querying one big collection, the application can concurrently fire the same query at multiple collections and merge their results, improving response time (see sketch 3 after this list).
  • Breaking up the collection also reduces the time it takes to build indexes, and it adds fail-safety: in the worst case one collection can go bad, but not all 12 at once.
  • It’s always good practice to configure connection pooling for MongoDB clients, so as not to exhaust TCP connections, which in turn can hinder I/O operations (see sketch 4 after this list).
  • MongoDB by default has a unique index on _id. If possible, reuse it as the shard key or unique key to avoid an additional index.
  • Note that you may not specify a unique constraint on a hashed index.
  • Starting in MongoDB 7.0.3 (and 6.0.12 and 5.0.22), you can drop the index for a hashed shard key. This can speed up data insertion for collections sharded with a hashed shard key, and can also speed up data ingestion when using mongosync (see sketch 5 after this list).
  • It's better to bulk load a collection first and then build its indexes, rather than bulk loading into a collection that already has all its indexes (see sketch 6 after this list).
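
Sketch 1 below illustrates the per-key merge across two collections. It's a minimal PyMongo sketch, not our production code; the database, collection, and field names (mydb, orders, shipments, orderId) are hypothetical, and both lookups assume an index on the key.

    # Sketch 1: merge data for one key from two collections in the
    # application, rather than joining millions of keys server-side.
    # All names below (mydb, orders, shipments, orderId) are hypothetical.
    from pymongo import MongoClient

    client = MongoClient("mongodb://localhost:27017")
    db = client["mydb"]

    def merged_view(order_id):
        # One indexed point lookup per collection, merged in the application.
        order = db["orders"].find_one({"orderId": order_id})
        shipment = db["shipments"].find_one({"orderId": order_id})
        if order is not None:
            # Embed the second document under its own key to avoid collisions.
            order["shipment"] = shipment
        return order

    print(merged_view("ORD-1001"))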
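Sketch 2 shows how a hashed shard key is declared. It must run against a mongos router; mydb.events and deviceId are hypothetical names.

    # Sketch 2: shard a collection with a hashed shard key so bulk writes
    # spread across shards instead of all landing on one range.
    # Run against a mongos router; names are hypothetical.
    from pymongo import MongoClient

    client = MongoClient("mongodb://mongos-host:27017")

    client.admin.command("enableSharding", "mydb")
    client.admin.command("shardCollection", "mydb.events",
                         key={"deviceId": "hashed"})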
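Sketch 3 fans one query out to 12 monthly collections in parallel and merges the results. The events_YYYY_MM naming scheme and the filter are assumptions; PyMongo's MongoClient is thread-safe, so one client can be shared across the worker threads.

    # Sketch 3: fire the same query at 12 monthly collections concurrently
    # and merge the results. The events_YYYY_MM naming is hypothetical.
    from concurrent.futures import ThreadPoolExecutor
    from pymongo import MongoClient

    client = MongoClient("mongodb://localhost:27017")
    db = client["mydb"]
    monthly = [f"events_2023_{m:02d}" for m in range(1, 13)]

    def query_one(name, flt):
        return list(db[name].find(flt).limit(100))

    def fan_out(flt):
        # One thread per monthly collection; MongoClient is thread-safe.
        with ThreadPoolExecutor(max_workers=len(monthly)) as pool:
            futures = [pool.submit(query_one, name, flt) for name in monthly]
            merged = []
            for f in futures:
                merged.extend(f.result())
        return merged

    docs = fan_out({"deviceId": "D-42"})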
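Sketch 4 shows connection-pool settings on the client. The sizes are illustrative, not recommendations; maxPoolSize, minPoolSize, and waitQueueTimeoutMS are standard PyMongo options.

    # Sketch 4: bound and reuse TCP connections via the driver's pool.
    # Pool sizes below are illustrative only.
    from pymongo import MongoClient

    client = MongoClient(
        "mongodb://localhost:27017",
        maxPoolSize=50,           # upper bound on connections per host
        minPoolSize=5,            # keep a few connections warm
        waitQueueTimeoutMS=2000,  # fail fast instead of queueing forever
    )
    # Share this one client across the application; creating a client per
    # request defeats the pool and churns TCP connections.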
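Sketch 5 drops the supporting index of a hashed shard key on a server version that allows it. The collection and field are hypothetical, and "deviceId_hashed" assumes the default <field>_hashed index-naming convention.

    # Sketch 5: on 7.0.3 / 6.0.12 / 5.0.22 and later, drop the hashed
    # shard-key index to speed up inserts. Names are hypothetical.
    from pymongo import MongoClient

    client = MongoClient("mongodb://mongos-host:27018")
    db = client["mydb"]

    # The collection was sharded with key {"deviceId": "hashed"}; the
    # supporting index gets the default name "deviceId_hashed".
    db["events"].drop_index("deviceId_hashed")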
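Sketch 6 contrasts load-then-index with indexing up front. The data shape is made up; the point is that insert_many runs without index-maintenance overhead and create_index then does a single bulk build.

    # Sketch 6: bulk load first, then build secondary indexes, so each
    # insert does not pay index-maintenance cost. Data is hypothetical.
    from pymongo import MongoClient, ASCENDING

    client = MongoClient("mongodb://localhost:27017")
    coll = client["mydb"]["events_2023_01"]

    batch = [{"deviceId": f"D-{i}", "value": i} for i in range(100_000)]

    # Unordered insert lets the server parallelize the writes.
    coll.insert_many(batch, ordered=False)

    # One bulk index build after the data is in place.
    coll.create_index([("deviceId", ASCENDING)])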
