Easing Big Pharma’s Big Data Pain

In-memory computing cures pharma company pains

Big Pharma has been dealing with Big Data since before it was a trend. It can take years to discover, test, and launch new drugs, and one reason is that each step involves processing large data sets. In-memory computing can reduce the time it takes to launch new drugs by speeding access to this data.

Three areas of drug development jump out as Big Data pain points for which in-memory computing can provide a cure:

1. Faster Drug Discovery: A Forbes report recently estimated that the cost of getting a new drug to market is upwards of $4 billion. Not surprising. Maintaining detailed records of tests and trial results, and identifying efficacy and side effects from these huge data sets, can be time-consuming using traditional disk-bound databases. In fact, without the right tools, pharmaceutical firms are forced to choose between waiting longer for results or confining research to a small sample, which can be risky. An in-memory data solution, which can be deployed in just a few weeks, mitigates this risk by processing full data sets within defined performance and service level agreements (SLAs). The results are higher drug discovery success rates and lower costs. (A code sketch of this full-data-set approach follows this list.)

2. Simpler Compliance: The list of laws and regulations around Big Pharma grows every month. As a result, companies are spending more time and money on compliance and legal activities. In the United States, Big Pharma has to comply with FDA regulatory policies, recalls, cGMP, and current and past FDA 483s, and each requirement has its own data set. With all the relevant compliance data sets loaded into memory, you have immediate access to every one of them and can run checks to confirm compliance in real time (see the second sketch below).

3. Real-time Analytics: Drug discovery also requires sifting through large databases like CDC WONDER, the Centers for Medicare & Medicaid Services data sets, and the Health Indicators Warehouse. In-memory data management speeds the analysis by holding up to hundreds of terabytes in memory, removing disk I/O constraints. With in-memory computing, you get extremely low, predictable latency, even under load, so data volumes can grow without impacting performance (see the final sketch below).
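
To make the first point concrete, here is a minimal sketch in plain Java of analyzing a full trial data set in memory. It uses the standard library's in-memory collections as a stand-in for a dedicated in-memory store such as BigMemory; the TrialRecord class and the adverse-event calculation are hypothetical, not taken from any real trial schema.

import java.util.List;
import java.util.Map;
import java.util.stream.Collectors;

public class TrialAnalysis {

    // Hypothetical record of a single trial observation.
    record TrialRecord(String drugId, boolean adverseEvent) {}

    // With every record held in memory, the full data set can be scanned
    // directly -- no sampling, and no disk I/O on the query path.
    static Map<String, Double> adverseEventRates(List<TrialRecord> records) {
        return records.stream().collect(Collectors.groupingBy(
                TrialRecord::drugId,
                Collectors.averagingDouble(r -> r.adverseEvent() ? 1.0 : 0.0)));
    }

    public static void main(String[] args) {
        List<TrialRecord> records = List.of(
                new TrialRecord("drug-A", false),
                new TrialRecord("drug-A", true),
                new TrialRecord("drug-B", false));
        // Prints each drug's adverse-event rate, e.g. {drug-A=0.5, drug-B=0.0}
        System.out.println(adverseEventRates(records));
    }
}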
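
The second sketch shows the real-time compliance idea: each regulatory data set lives in its own in-memory map, so a check across all of them is a handful of hash lookups rather than several disk-bound queries. The data-set names and the flagging logic are purely illustrative, not drawn from any real FDA feed.

import java.util.Map;
import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;

public class ComplianceCheck {

    // One in-memory set per compliance data set (illustrative names).
    private final Map<String, Set<String>> dataSets = new ConcurrentHashMap<>();

    public ComplianceCheck() {
        dataSets.put("recalls", Set.of("lot-123"));
        dataSets.put("fda-483", Set.of("site-9"));
    }

    // With everything resident in memory, a compliance check becomes a
    // few O(1) lookups -- fast enough to run on every transaction.
    public boolean isFlagged(String id) {
        return dataSets.values().stream().anyMatch(s -> s.contains(id));
    }

    public static void main(String[] args) {
        ComplianceCheck check = new ComplianceCheck();
        System.out.println(check.isFlagged("lot-123")); // true
        System.out.println(check.isFlagged("lot-999")); // false
    }
}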
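
Finally, a sketch of why in-memory analytics stays fast as volumes grow: once the working set is resident, an aggregate query is a pure memory scan that parallelizes across cores, which is what makes low, predictable latency plausible at scale. The synthetic data here simply stands in for an extract from a source like CDC WONDER.

import java.util.concurrent.ThreadLocalRandom;
import java.util.stream.LongStream;

public class InMemoryScan {
    public static void main(String[] args) {
        // Synthetic data resident in memory (stand-in for a loaded extract).
        long[] values = new long[10_000_000];
        for (int i = 0; i < values.length; i++) {
            values[i] = ThreadLocalRandom.current().nextLong(1_000);
        }

        // The query is a memory scan: no disk I/O on the hot path, and it
        // parallelizes across cores, so latency stays predictable under load.
        long start = System.nanoTime();
        long matches = LongStream.of(values).parallel().filter(v -> v > 900).count();
        long elapsedMs = (System.nanoTime() - start) / 1_000_000;

        System.out.println("matches=" + matches + " in " + elapsedMs + " ms");
    }
}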

Terracotta BigMemory, a market-leading in-memory computing solution, can help with the above use cases and many more. What Big Data pain points does this bring to mind for your organization? Post a comment, or drop me a line at gagan@terracotta.org.

