Key Trends for the World’s Data-driven Future

This is a guest post by Craig Panigiris from Professional Advantage

The average Western city-dweller is now exposed to more data in a day than a person in the 15th century encountered in their entire lifetime. And that’s just observable data – the information we physically see and hear on YouTube, 24-hour TV, digital billboard advertising, and the myriad other informational stimuli generated by modern society.

Below the surface however lurks the bulk of the data iceberg – the complex world of big data.

Big data is what makes everything from Netflix to next-generation transport systems what they are to the end user – innovative, responsive and highly personalised. Or, more precisely, it’s the effective collection, processing, analysis and utilisation of data that’s driving technological innovation.

So as a species evolutionarily hell-bent on making things easier for itself, how will our desire for a more user-friendly future change the way we use big data?

Ignorance is Bliss

As computational capabilities continue to advance, so too does the volume of bafflingly large data sets being generated. Take the Large Hadron Collider as an example. If you were to try to capture all of the collision data it generates, you’d be up against 500 quintillion (5×10²⁰) bytes per day – almost 200 times more than all the other data in the world combined. That would, of course, be impossible. And that’s the point.

Making use of big data begins with knowing how much of it to make use of. In the case of the Large Hadron Collider, that means working with roughly 0.001% of the collision data: of the 600 million collisions occurring every second, only around 200 are deemed significant and recorded. Even that amounts to some 30 petabytes of data per year, for which CERN uses Hadoop to store only the metadata, while Terracotta’s in-memory solutions hold just the data related to the monitoring of sensors and alarms.

As for the remaining collision data that’s captured? It’s stored on tapes.

So most of what happens in the Large Hadron Collider goes uncaptured or uncatalogued. And while the collider is undoubtedly an extreme case, the analogy scales to the future of big data in just about every application: as big data gets bigger, it will become increasingly important to focus analytical efforts only on the data that is practical and useful. People who can interpret data at a meta level and then make smart decisions based on what they see will therefore find their professional stock in increasingly high demand.
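To make the principle concrete, here is a minimal, purely illustrative sketch in Python – the event fields, threshold and numbers are invented for the example, and this is not CERN’s actual trigger code – of the same idea: scan a high-rate event stream and keep only the tiny fraction that clears a "significance" cut, so everything else is discarded before it ever touches storage or analysis.

```python
import random

# Toy illustration (not CERN's real trigger system): simulate a high-rate
# event stream and keep only the tiny fraction of events that passes a
# "significance" cut, discarding everything else before it is ever stored.

SIGNIFICANCE_THRESHOLD = 0.99999  # hypothetical cut; keeps roughly 0.001% of events


def event_stream(n_events):
    """Generate n_events fake collision events with a random 'energy' score."""
    for event_id in range(n_events):
        yield {"id": event_id, "energy": random.random()}


def trigger_filter(events, threshold):
    """Yield only the events whose energy exceeds the threshold."""
    for event in events:
        if event["energy"] > threshold:
            yield event


if __name__ == "__main__":
    total = 1_000_000
    kept = list(trigger_filter(event_stream(total), SIGNIFICANCE_THRESHOLD))
    print(f"Kept {len(kept)} of {total:,} events "
          f"({len(kept) / total:.4%}) for further analysis")
```

The specifics are made up, but the shape of the approach is the point: the filtering decision is cheap and happens up front, so only the data worth analysing is ever materialised.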

Adapt, Adopt, Improve

As technology platforms continue to proliferate, so too do the data sets those platforms generate. And ever since the 1970s, the pace of data proliferation has outstripped our ability to capture and interpret it efficiently. Where big data’s role in the future of technological innovation is concerned, this means accepting that our appetite for collecting and exploiting ever more data is fundamentally at odds with our innate desire to streamline and simplify. Or as Kurt Marko put it in his Forbes piece entitled ‘Big Data Glut Fuels New Analytic Tools, Services and Start-ups’: “superabundant data offers business benefits that are too compelling to sacrifice on the altar of IT simplicity.”

So the successful organisations of the future will be those that focus their resources and energy on adopting, adapting and improving the available technology to manage data as efficiently as possible. Rather than some utopian, one-size-fits-all holy grail, the future of big data will be built on taking the crucial components – be they Quid, BitYota, Hadoop, Interana or others – and, by whatever means necessary, lashing them together into the raw material from which valuable combined insights can be gleaned.

More Speed, Less Haste

Along with the time demands involved in interpreting increasingly large data sets, the explosion in the velocity of data also poses problems around latency. Put more simply, more data is being generated more quickly – and because time is money, it needs to be processed and interpreted as quickly and efficiently as possible.

Making use of big data in the future will therefore hinge not only on minimising data latency so that decisions can be made faster, but also on analysing and deriving insight from big data in real time wherever possible.
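As a rough illustration of what real-time, in-memory analysis can look like – this is a generic sketch in plain Python with invented window sizes and thresholds, not any particular vendor’s product – the example below keeps a short sliding window of readings in memory and raises an alert the moment the rolling average crosses a limit, rather than waiting for a batch job to catch up.

```python
import time
from collections import deque

WINDOW_SECONDS = 5          # hypothetical sliding-window size
ALERT_THRESHOLD_MS = 50     # hypothetical alert threshold


class SlidingWindowAverage:
    """Keep recent (timestamp, value) pairs in memory and report their mean."""

    def __init__(self, window_seconds):
        self.window_seconds = window_seconds
        self.readings = deque()

    def add(self, value, now=None):
        now = now if now is not None else time.time()
        self.readings.append((now, value))
        self._evict(now)

    def average(self, now=None):
        now = now if now is not None else time.time()
        self._evict(now)
        if not self.readings:
            return None
        return sum(value for _, value in self.readings) / len(self.readings)

    def _evict(self, now):
        # Drop readings that have fallen out of the sliding window.
        while self.readings and now - self.readings[0][0] > self.window_seconds:
            self.readings.popleft()


# Example: flag a latency spike as soon as the rolling average crosses the limit.
monitor = SlidingWindowAverage(WINDOW_SECONDS)
for latency_ms in [12, 14, 11, 95, 102, 98]:
    monitor.add(latency_ms)
    if monitor.average() > ALERT_THRESHOLD_MS:
        print(f"Alert: rolling average latency is {monitor.average():.1f} ms")
```

The numbers are invented; what matters is that the data is acted on while it is still in memory and still fresh, instead of being queued for later batch analysis.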

And just as the past five years saw demand for data scientists skyrocket, the next five years could well see the rise of the “real-time” data scientist – the brightest of the bright who can make quick decisions based on trends they see in real time data streams.

So as Linda Burtch wrote in Wired, “tell your kids to be data scientists, not doctors.”

About the Author

Craig Panigiris is an experienced marketing professional with 5+ years’ experience in the ICT industry and 10+ years’ experience in B2B marketing. At Professional Advantage, Craig leads a team of marketing professionals delivering innovative marketing strategies, including cross-channel demand generation campaigns, brand and collateral development, and public relations. He is also responsible for developing and implementing Professional Advantage’s overall strategy for building repeatable, predictable demand generation. He can be contacted at Craig.Panigiris@pa.com.au.

