Here's how in-memory computing just became cloud's new BFF

Storage and the cloud don't always go hand in hand. That's where in-memory computing comes in. Expert Christopher Tozzi explains how to take advantage of these new cloud options.

What's standing between your cloud and high performance? There's a good chance that it is hard disk I/O rates, which are dismally slow by the standards of modern computing.

But there's a solution: in-memory computing. This article explains what in-memory computing is and how to get the most from it in a cloud environment.

Why hard disks are slow

If you run your app in the cloud, you probably don't think much about the disks that store its data. After all, one of the main reasons for migrating workloads to the cloud is to eliminate the need to maintain your infrastructure, including storage arrays, yourself.

Yet, if the application in your cloud environment is exhibiting poor performance, it might be worth your while to think about the extent to which slow hard disk read and write rates are creating a bottleneck in your workflow.

Hard disk read/write speeds have not improved much in the past several decades. Disk storage capacity has grown exponentially and disks are more reliable, but hard drives today don't transfer information much faster than they did when Windows NT was the latest and greatest thing in enterprise computing.

In other words, if the data your app works on needs to be read from or written to a hard disk, your app's performance will be severely constrained.

Why in-memory computing?

In-memory computing offers a solution to this conundrum. As the term implies, in-memory computing performs operations using system memory (RAM), rather than traditional hard disks, as the storage mechanism. RAM can read and write data thousands of times faster than the I/O rate of hard disks.
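The gap is easy to see for yourself. This Python sketch writes the same 1 MiB payload to an in-memory buffer and to a file that is forced out to storage with `fsync`; on typical hardware the in-memory write finishes far sooner, though the exact ratio depends on your disk and operating system:

```python
import os
import tempfile
import time
from io import BytesIO

payload = b"x" * (1 << 20)  # 1 MiB of data

# Write to an in-memory buffer (RAM only).
start = time.perf_counter()
buf = BytesIO()
buf.write(payload)
mem_seconds = time.perf_counter() - start

# Write the same data to a file and force it to physical storage.
start = time.perf_counter()
with tempfile.NamedTemporaryFile(delete=False) as f:
    f.write(payload)
    f.flush()
    os.fsync(f.fileno())  # bypass the OS page cache
disk_seconds = time.perf_counter() - start
os.unlink(f.name)

print(f"memory: {mem_seconds:.6f}s  disk: {disk_seconds:.6f}s")
```

Note that without the `fsync` call, the operating system's page cache would absorb the file write into RAM anyway, which is itself a reminder of how much modern systems already lean on memory to hide disk latency.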

In-memory computing is not a new concept. Creating RAM disks on Linux has been possible for years, providing applications and system processes with access to data that is stored in RAM as if it existed on a normal hard disk.
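On most Linux systems, tmpfs mounts such as /dev/shm already expose RAM through the filesystem, so any process can use in-memory storage through ordinary file APIs. A minimal sketch, assuming a Linux host (it falls back to the regular temporary directory elsewhere):

```python
import tempfile
from pathlib import Path

# /dev/shm is a tmpfs (RAM-backed) mount on most Linux distributions;
# fall back to the ordinary temp directory on other platforms.
shm = Path("/dev/shm")
ramdisk = shm if shm.is_dir() else Path(tempfile.gettempdir())

scratch = ramdisk / "inmem_scratch.dat"
scratch.write_bytes(b"hot data served straight from RAM")
data = scratch.read_bytes()   # ordinary file APIs, RAM-backed storage
print(data)
scratch.unlink()              # RAM-backed files vanish on reboot anyway
```

The appeal of this approach is that applications need no modification: anything that can read and write files can use the RAM disk.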

However, it is now more feasible to leverage in-memory computing in an enterprise due to both the advent of the cloud, which makes it easier to access large volumes of RAM storage, and the development of sophisticated in-memory computing platforms, such as those discussed below.

In-memory computing challenges

Of course, there are some drawbacks to adopting in-memory computing. If there weren't, no one would use traditional hard disks anymore.

One major limitation of in-memory computing is its cost. RAM is less expensive than it was, but it still costs much more than hard disk space. As a result, obtaining an in-memory storage environment large enough to handle your app's various needs can be costly.

In-memory computing is also challenging because data stored in memory is not persistent. Whenever a server shuts down, any data that lives only in its RAM is lost. That's not the case with hard disks, where data persists after a system stops running. In-memory computing, therefore, requires you to have a strategy in place for moving data to persistent storage if you need to retain that data over a long period.
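One common strategy is to treat RAM as the working copy and periodically snapshot it to persistent storage, so state can be rebuilt after a shutdown. The JSON snapshot below is a toy illustration of the pattern, not any particular product's mechanism:

```python
import json
import tempfile
from pathlib import Path

snapshot_path = Path(tempfile.gettempdir()) / "kv_snapshot.json"

def snapshot(store: dict) -> None:
    """Persist the in-memory store so it can survive a restart."""
    snapshot_path.write_text(json.dumps(store))

def restore() -> dict:
    """Rebuild the in-memory store from the last snapshot, if any."""
    if snapshot_path.exists():
        return json.loads(snapshot_path.read_text())
    return {}

store = {"user:1": "alice", "user:2": "bob"}  # lives in RAM
snapshot(store)    # e.g., run on a timer or after every N writes
store = None       # simulate the server shutting down
store = restore()  # state is rebuilt from persistent storage
print(store)
```

Real in-memory platforms refine this idea with techniques such as write-ahead logging and replication, but the trade-off is the same: anything written after the last snapshot is at risk.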

Last, but not least, in-memory computing can be challenging to leverage effectively because not all software is designed to take advantage of it. An app that was written to run on an infrastructure with traditional storage may need modifications to make the most of in-memory computing.

In-memory computing and the cloud

The cloud is an excellent tool for overcoming the challenges associated with in-memory computing.

A cloud environment allows organizations to gain access to large amounts of RAM on demand, rather than having to invest in such infrastructure on premises. This approach helps organizations to overcome cost barriers that might otherwise have made it impossible to run operations in memory.

A cloud environment can also help make in-memory storage more reliable by providing high availability and redundancy. If you build a cloud-based infrastructure composed of virtual machines with automatic failover or redundant hosts, then disruption of RAM storage on one system will not lead to data loss. This high availability would be more difficult to implement in an on-premises data center where system resources tend to be more constrained.
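The redundancy idea can be sketched as a store that mirrors every write across independent in-memory replicas, so wiping one replica's contents does not lose data. This is a toy model of what a real cloud cluster with failover would provide, not production code:

```python
class ReplicatedStore:
    """Toy in-memory store that mirrors writes across replicas."""

    def __init__(self, replica_count: int = 2):
        self.replicas = [{} for _ in range(replica_count)]

    def put(self, key, value):
        for replica in self.replicas:   # mirror the write everywhere
            replica[key] = value

    def get(self, key):
        for replica in self.replicas:   # read from any replica that has it
            if key in replica:
                return replica[key]
        raise KeyError(key)

    def fail(self, index):
        self.replicas[index].clear()    # simulate one host losing its RAM

store = ReplicatedStore()
store.put("session:42", "active")
store.fail(0)                   # one VM reboots; its memory is wiped
print(store.get("session:42"))  # the surviving replica still answers
```

In a real deployment the replicas would live on separate hosts, and the cloud platform's failover machinery would redirect traffic automatically.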

For these reasons, pairing the cloud with in-memory computing is an excellent way to take advantage of performance benefits that in-memory workloads provide without the cost or complication of running in-memory apps on premises.

How to run in-memory workloads in the cloud

If you want to perform in-memory computing in a cloud environment, the easiest way to start is by using an in-memory storage framework that is designed to work with the cloud.

One example is Apache Spark, an in-memory data analytics platform. Spark lets you process data at a much faster rate than you could achieve using magnetic disk storage.

While Spark can run on premises, a number of public cloud providers offer turnkey Spark solutions. Google Cloud provides Cloud Dataproc. Azure includes Spark on its HDInsight platform. Also, Amazon offers Spark as part of its Elastic MapReduce.

If you have a large volume of data to process quickly, these and similar hosted Spark implementations can help you do so without a large investment in new infrastructure or system configuration. Of course, you could also run Spark on a private cloud by installing it yourself on a virtual server.

For in-memory data storage that can run on a private cloud, platforms like Red Hat JBoss Data Grid and IBM DB2 are potential solutions. They are not data analytics platforms like Spark, but in-memory data stores that can serve analytics applications or any other type of workload that requires very fast read and write times. Platforms like these are also designed to ensure high data availability by replicating data throughout a cloud cluster to prevent data loss in the event that one server shuts down.

Conclusion

In-memory computing could be the difference between run-of-the-mill app performance and blazing speed. While in-memory computing was once very expensive to obtain and difficult to configure, cloud-based in-memory computing platforms offer cost-efficient and convenient ways to overcome the bottleneck that slow hard disks can create on a traditional infrastructure.

Next Steps

How designing an app in multicloud can improve performance

Learn how to manage cloud costs

Reduce costs during and after deployment in a private cloud

This was last published in September 2016
