I’ve been asked a lot about the performance differences between Terracotta BigMemory and MongoDB, so I thought I would share the results of a head-to-head comparison between the two in which I was recently involved. The enterprise doing the comparison was a market research firm that had previously used SQL Server, and it was suffering access times as slow as 80 to 100 milliseconds. They knew they could do a lot better.
The company’s systems architect found that BigMemory’s main advantage was speed. MongoDB delivered response times around 10ms, and BigMemory was in the same range when fetching data across a network connection from a server array. But where BigMemory really shone was in its ability to move and co-locate data in the application nodes themselves, where response times dropped to an incredible 0.5ms—a 95% improvement. BigMemory’s Automatic Resource Control feature transparently (without programmer involvement) moves data between networked in-memory arrays and local tiers to optimize performance.
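To make the tiering concrete, here is a minimal sketch of what an Ehcache/BigMemory configuration with local and clustered tiers might look like. The cache name, sizes, and server address are illustrative assumptions, not the firm’s actual settings:

```xml
<!-- ehcache.xml: illustrative tiering sketch (name, sizes, and URL are assumptions) -->
<ehcache maxBytesLocalHeap="1g" maxBytesLocalOffHeap="16g">
  <!-- Connection to the Terracotta server array holding the full data set -->
  <terracottaConfig url="localhost:9510"/>
  <cache name="marketData">
    <!-- Hot data is kept in the local heap and off-heap tiers of each
         application node; the complete data set lives on the server array. -->
    <terracotta/>
  </cache>
</ehcache>
```

With a configuration along these lines, the application reads from local memory when the data is resident there and falls back to the server array otherwise—no code changes required.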
Another key point of differentiation was ease of deployment. Because BigMemory supports the Ehcache API, it was simple for the team to snap BigMemory right into their existing Java applications. Team members were already familiar with Ehcache, so it was easy for them to get up and running with very little assistance from me.
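As a sketch of what that drop-in looks like, the code below uses the standard Ehcache 2.x API; with BigMemory on the classpath, the same calls transparently take advantage of the off-heap and clustered tiers. The cache name, key, and value here are hypothetical:

```java
import net.sf.ehcache.Cache;
import net.sf.ehcache.CacheManager;
import net.sf.ehcache.Element;

public class BigMemorySketch {
    public static void main(String[] args) {
        // Loads ehcache.xml from the classpath; existing Ehcache code
        // needs no changes to run against BigMemory's tiers.
        CacheManager manager = CacheManager.newInstance();
        Cache cache = manager.getCache("marketData"); // hypothetical cache name

        // Same put/get calls the team already used with plain Ehcache.
        cache.put(new Element("respondent:42", "profile-blob"));
        Element hit = cache.get("respondent:42");
        if (hit != null) {
            System.out.println(hit.getObjectValue());
        }
        manager.shutdown();
    }
}
```

Because the API surface is unchanged, swapping the backing store is mostly a matter of configuration and classpath, which is why the team needed so little help.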
The systems architect at the company also told me that he was very impressed with BigMemory’s Web dashboard, the Terracotta Management Console, for managing and monitoring in-memory data sets. He found MongoDB’s toolset to be more of a UI for interacting with the database, rather than the kind of rich management and monitoring metrics BigMemory provides.
If you’re looking for more on how BigMemory can outperform MongoDB, please reply here to continue the conversation or contact Terracotta sales.