The Recipe for Next-Generation Data Services

Jerome McFarland | July 01, 2016

"640K ought to be enough for anybody."

"There is no reason anyone would want a computer in their home."

“Cellular phones will absolutely not replace local wire systems.”

These unfortunate, oft-cited predictions highlight a recurring theme throughout the history of technology: the human appetite for broader, faster access to information should never be underestimated. Many of the most disruptive innovations ultimately serve our insatiable desire for more data, of better quality, delivered faster, and accessible anywhere at any time. That hunger for data has never been stronger than it is today, particularly for enterprise applications and the business processes that depend on them.

Today, enterprise operating systems and applications are continually evolving to provide new and better services to their users. With very few exceptions, these services rely on the ability to effectively access, process, analyze, and share data. The growth of virtualization, cloud computing, and big data analytics exemplify this trend. As a result, the pressure on enterprise IT, especially data storage and access, has increased to unprecedented levels. Modern storage infrastructure must support complex, dynamic workflows involving numerous integrated applications with data spanning multiple on-premises and off-premises/cloud environments. And it isn’t just about infrastructure – it’s about enabling business process agility by delivering unrestricted, self-service data access to applications and to the analytics teams that drive them.

Consequently, when applied to storage and data services, assumptions of “large enough” or “fast enough” typically don’t hold true for very long. Traditional storage infrastructure and static data silos have become increasingly inadequate. We must now move beyond those inflexible approaches and re-imagine how to fluidly store and access data, with consolidated application and workflow intelligence, dynamically and transparently across on-premises and cloud infrastructures.

How? An effective solution requires several ingredients:

  • Storage media capable of consolidating diverse workloads with cost-effective performance, capacity, and endurance
  • Flexible, software-defined deployment and management
  • Application intelligence and self-service workflows
  • Seamless, elastic unification of varying data types, environments, and geographic locations

So let’s start with the first ingredient: flash memory. Flash was once viewed as a prohibitively expensive, niche technology, but the barriers to its widespread adoption have evaporated more quickly than most expected. Device-level concerns around capacity and endurance have been addressed through innovation in both hardware (e.g., lithography shrinks, 3D stacking) and software (e.g., intelligent wear-leveling, advanced error correction). In parallel, the cost of flash has steadily decreased. Flash is becoming ubiquitous, and the latest cost-benefit analyses point clearly toward solid-state drives (SSDs) as the foundational hardware for modern storage infrastructure. The stage is set for next-generation, flash-centric architectures.
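
For the technically curious, here’s a minimal sketch (in Python, with hypothetical names) of the core idea behind wear-leveling: NAND blocks tolerate only a limited number of program/erase cycles, so the flash translation layer steers each new write toward the least-worn free block. Real SSD firmware is far more sophisticated, but the principle looks like this:

```python
# A deliberately simplified, hypothetical sketch of wear-leveling.
# NAND blocks endure a limited number of program/erase (P/E) cycles,
# so the allocator always hands out the least-worn free block.

class WearLeveler:
    def __init__(self, num_blocks):
        # erase_counts[i] = P/E cycles block i has endured so far
        self.erase_counts = [0] * num_blocks
        self.free_blocks = set(range(num_blocks))

    def allocate_block(self):
        """Pick the least-worn free block to absorb the next write."""
        block = min(self.free_blocks, key=lambda b: self.erase_counts[b])
        self.free_blocks.remove(block)
        return block

    def erase_block(self, block):
        """Erase a block, record the wear, and return it to the free pool."""
        self.erase_counts[block] += 1
        self.free_blocks.add(block)

# Over many allocate/erase cycles, wear stays roughly uniform across
# blocks instead of concentrating on a few "hot" locations.
wl = WearLeveler(num_blocks=8)
blk = wl.allocate_block()
wl.erase_block(blk)
```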

Improved SSD economics and efficiency are only part of the solution, however. To fully leverage the benefits of flash (low latency, high throughput, efficient random access), the layers between the hardware and the application must also evolve. That evolution has already begun. For PCIe-based SSDs, the NVMe protocol has removed the bottlenecks inherent in legacy storage drivers created for spinning disk. Designed from the ground up for high-performance storage, NVMe brings true flash performance one step closer to those hungry users and applications.
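
For a rough intuition of why that matters: the legacy AHCI interface exposes a single command queue 32 entries deep, shared by every CPU core, while NVMe allows up to 64K queues of up to 64K commands each, typically one queue pair per core. The toy model below (illustrative only, not a real driver API) contrasts the two queueing schemes:

```python
# A toy queueing model, not a real driver interface. Legacy AHCI exposes
# one command queue (depth 32) shared by all CPU cores; NVMe allows up to
# ~64K queues of up to ~64K commands each, typically one submission/
# completion queue pair per core, eliminating cross-core contention.
from collections import deque

class LegacyAhciModel:
    DEPTH = 32                              # one queue, 32 slots, shared

    def __init__(self):
        self.queue = deque()

    def submit(self, core_id, cmd):
        if len(self.queue) >= self.DEPTH:
            return False                    # queue full: core must retry
        self.queue.append((core_id, cmd))
        return True

class NvmeModel:
    DEPTH = 65536                           # per-queue depth (64K)

    def __init__(self, num_cores):
        # one submission queue per core: no shared lock, no contention
        self.queues = [deque() for _ in range(num_cores)]

    def submit(self, core_id, cmd):
        queue = self.queues[core_id]
        if len(queue) >= self.DEPTH:
            return False
        queue.append((core_id, cmd))
        return True

# With 8 cores issuing I/O concurrently, the AHCI model caps at 32
# in-flight commands in total; the NVMe model allows 8 * 65536.
```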

So what’s next?

Well, we at Elastifile see cloud-scale file systems, distributed storage, and transparent access across hybrid clouds as crucial areas that have lagged behind as flash has become mainstream. We believe that these areas are now ripe for impactful innovation. And, more importantly, we’re doing something about it.

And what about the other ingredients?

We’ve got plans for those also. Please join us in the coming weeks as we explain our vision for the next generation of software-defined data services. The best is yet to come.