Last week Manish Devgan, Head of Product Management for Terracotta, and Matthias Braeger, Software Engineer at CERN, presented at Strata. Their talk focused on how CERN is unlocking big data using technologies such as Hadoop and Terracotta BigMemory. The session was packed, standing room only, with some very insightful questions at the end. Here’s a brief summary of the session for those of you who could not attend.
Matthias kicked off the session with a brief overview of CERN and how the world’s largest machine, the Large Hadron Collider (LHC), is set up. He then explained that even though the LHC generates 30PB of data per year, CERN only uses Hadoop to store the metadata. The rest of the data is stored on tape, which remains far cheaper than disk because tapes consume no electricity when the data is not being accessed. CERN uses Terracotta’s in-memory solution to store sensor data for real-time access to monitoring sensors and alarms.
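The monitoring use case boils down to a familiar pattern: keep the latest reading from each sensor in memory so dashboards and alarm checks never touch slow storage. The sketch below is purely illustrative (the class and threshold are hypothetical, not CERN's or BigMemory's actual API); BigMemory layers off-heap storage and clustering on top of this basic idea.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Illustrative sketch: an in-memory store holding the latest reading per
// sensor, with a simple alarm threshold. Real deployments would use an
// off-heap, clustered cache rather than a plain on-heap map.
class SensorCache {
    private final Map<String, Double> latest = new ConcurrentHashMap<>();
    private final double alarmThreshold;

    SensorCache(double alarmThreshold) {
        this.alarmThreshold = alarmThreshold;
    }

    // Record the newest reading; return true if it should raise an alarm.
    boolean record(String sensorId, double value) {
        latest.put(sensorId, value);
        return value > alarmThreshold;
    }

    // Real-time read path: served entirely from memory.
    Double read(String sensorId) {
        return latest.get(sensorId);
    }
}
```

The point of the pattern is the read path: alarms and dashboards query RAM, while the bulk historical data stays on cheap tape.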
Manish followed Matthias by making the case for a data layer like Terracotta’s In-Memory Data Fabric. He described how it can operationalize Hadoop and play a critical role in streaming analytics, and reviewed the data management and analytics landscape, including NoSQL, NewSQL, in-memory data grids, Spark, and Hadoop. He then gave a detailed overview of how CERN uses the Terracotta offering for high-performance monitoring and to support big data volumes without scalability or availability challenges. The details are available in the attached presentation here and demonstrate how critical it is to deploy an in-memory platform when dealing with high-volume, high-velocity data.
If you have any questions about the session or would like to share your thoughts, please leave a comment below.