Podcast: Rely on economical software-defined hyperscale storage

RAID is so yesterday. Hyperscale software-defined storage is a modern model that unites disparate storage resources and recovers unused capacity that accrues over time.

Storage is wasteful. Over time, inefficiencies in the way storage is allocated and reclaimed for reuse, though minute, have a way of adding up. It's tantamount to throwing money down the drain. That wastefulness can become unmanageable and costly as multiple storage systems in multiple infrastructures, both on premises and in the cloud, accrue and remain unaware of each other. Mark Lewis, chairman and CEO of Formation Data Systems, is on a mission to find and reclaim idle storage.

In this podcast, Lewis explains Formation's approach to solving the predicament: consolidating disparate storage resources into a single unified environment, which also presents an opportunity to reclaim underused capacity sitting on virtualized servers. Call it hyperscale software-defined storage.

"The master problem we set out to solve was to give classic IT shops the ability to have low-cost, high TCO value, very agile storage that looks like cloud storage," Lewis says. The problem with storage is that while other technologies have advanced to the point of being indistinguishable from their predecessors, storage is still predicated on a RAID model that has changed little in more than two decades. "Storage folks are very conservative, [but] we think it's time for it to change."

RAID storage was built on a tightly coupled architecture of dual controllers, mirrored cache and custom hardware that had to be reliable. "If they crash, it's a very bad thing," he says. The so-called Google model for storage, Lewis explains, is built on components that are expected to be unreliable, but which are replicated in a manner that ultimately yields a robust system. "Instead of an airplane, it's a bunch of old pickup trucks and you don't care if one dies, because you have 20 others." It's this architecture that keeps costs down, more so than Google getting a more-favorable price on disk drives. "It's that Google, or Amazon, or Citibank is using that technology more effectively."
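
The contrast is easy to see in miniature. Below is a minimal sketch of the "pickup trucks" idea, assuming a hypothetical object store; it is not Google's or Formation's actual code, and the node names and replication factor are illustrative. Each object is written to several unreliable commodity nodes, so the loss of any one node loses no data.

import random

REPLICATION_FACTOR = 3  # assumed; real systems tune this per durability target

class Node:
    def __init__(self, name):
        self.name = name
        self.alive = True
        self.objects = {}

def put(nodes, key, value):
    # Write the object to REPLICATION_FACTOR distinct live nodes.
    live = [n for n in nodes if n.alive]
    for node in random.sample(live, REPLICATION_FACTOR):
        node.objects[key] = value

def get(nodes, key):
    # Any surviving replica can serve the read.
    for node in nodes:
        if node.alive and key in node.objects:
            return node.objects[key]
    raise KeyError(f"all replicas of {key!r} lost")

nodes = [Node(f"node-{i}") for i in range(20)]   # 20 old pickup trucks
put(nodes, "invoice-42", b"payload")
nodes[0].alive = False            # one truck dies...
print(get(nodes, "invoice-42"))   # ...the data is still readable

Durability comes from the replication policy rather than from any individual component, which is why commodity hardware suffices.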

Customers wanted more than this software-defined storage approach alone, noting that they often had multiple VMware systems with extra disk drives, plus aging attached storage arrays, each holding small amounts of unused capacity. "That's how Formation VSR (Virtual Storage Recapture) was born," Lewis says.

VSR takes existing unused storage sitting on servers or in attached arrays, pools it, and adds resiliency that allows it to be repurposed for production use.
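
In principle, the recapture step is straightforward: sum the stranded free space across hosts into a single pool, then carve volumes out of the aggregate rather than out of any one box. The sketch below illustrates the idea with hypothetical host names and sizes; it is not VSR's actual interface.

GIB = 1024 ** 3

free_capacity = {              # stranded free space per host, in bytes
    "esx-host-01": 400 * GIB,
    "esx-host-02": 250 * GIB,
    "old-array-a": 900 * GIB,
}

print(f"recaptured pool: {sum(free_capacity.values()) / GIB:.0f} GiB")

def allocate(pool, size):
    # Carve a volume out of the pooled capacity, greedily per host.
    remaining, placement = size, {}
    for host, free in sorted(pool.items(), key=lambda kv: -kv[1]):
        take = min(free, remaining)
        if take:
            placement[host] = take
            pool[host] -= take
            remaining -= take
        if remaining == 0:
            return placement
    raise RuntimeError("pool exhausted")

placement = allocate(free_capacity, 1200 * GIB)  # volume spans hosts transparently
print({host: f"{size / GIB:.0f} GiB" for host, size in placement.items()})

A real implementation would layer replication like that shown earlier on top of the pool; this sketch covers only the pooling arithmetic.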

The problem is hardly new; even going back to the early 1980s with MS-DOS, inefficient cluster sizes and sector allocation, especially with smaller files, could leave significant portions of a drive unavailable for other use. Evidently, little has changed over the decades.
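
A quick worked example shows how fast that slack adds up. Because a file always occupies a whole number of fixed-size clusters, every small file strands the remainder of its last cluster; the cluster size and file sizes below are illustrative.

CLUSTER = 32 * 1024                       # 32 KiB clusters, as on large FAT16 volumes
file_sizes = [1_200, 4_500, 700, 31_000]  # four small files, in bytes

def clusters_needed(size, cluster=CLUSTER):
    return -(-size // cluster)            # ceiling division

allocated = sum(clusters_needed(s) * CLUSTER for s in file_sizes)
used = sum(file_sizes)
print(f"{allocated} B allocated for {used} B of data: "
      f"{allocated - used} B ({100 * (allocated - used) / allocated:.0f}%) is slack")

Here, 37,400 bytes of data tie up 131,072 bytes of disk, so roughly 71% of the allocated space is wasted.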

"It's been hard in storage in particular because there has really been an incredible focus from the vendor community on hardware-based arrays," Lewis says. "That's a very static approach to storage."

One issue, Lewis says, is that these arrays, whether network-attached storage or a storage area network, tend to become siloed and dedicated to a single job. "Good for vendors, bad for companies," he says. By going with software-defined storage, almost any industry-standard hardware becomes eligible for capacity reclamation.

One key use for software-defined hyperscale storage is in major scale-out implementations of the kind typified by Google and Amazon Web Services (AWS). With classic array-based storage, achieving the necessary scale would be cost-prohibitive, Lewis says.

Existing enterprises that must deal with large amounts of legacy data assets face challenges not seen by newer cloud-only startups. The key to success with storage, Lewis says, is to do strategic things tactically. "Software-defined storage will change your storage environment and platform." But simply ripping out legacy storage arrays in favor of software-defined storage is not the way to go. "Find a low-importance project or a backup project and start there."

In stepping up to a storage model that's future-ready, Lewis offers several tips:

  • Focus on the total cost of ownership of the project.
  • Start with a small storage project that is not mission-critical.
  • Deploy and then eventually work up to the software-defined hyperscale storage models that are now used by the likes of Google, AWS, Facebook and others.

Google, Lewis says, has relied on software-defined hyperscale storage for more than a decade, rendering it well-proven. Regardless of which vendor a business ultimately chooses, the time has come to adopt a modern storage model that improves efficiency and cuts costs through recovery and reallocation of storage capacity that, often unnoticed, lies unused and wasted.

Next Steps

Does software-defined storage really matter?

Tips to avoid wasted storage capacity

What are the traps that lead to wasting storage capacity?
