NVMe Storage

It goes without saying that not all storage is created equal. But how do you assess the potential benefit of the latest fast, reliable storage to your organisation, especially if it's being deployed by a managed service provider? In other words, what's the particular advantage of choosing a provider that uses super-fast, ultra-reliable storage over one that doesn't?

To get to the bottom of this, let's briefly take a step back and consider the bigger picture. Many storage protocols still widely in use today, like SAS and SATA, were developed for well-established but, by today's standards, slow technologies like hard disk drives. That slowness is a function both of the limits of spinning-disk technology and of the computing power typically available when these protocols were established, so it's hardly surprising that newer protocols are many times faster.

Consider this: legacy SAS and SATA each support only a single I/O queue, with a maximum depth of 254 and 32 entries respectively. Today's best standard, NVMe, supports multiple I/O queues, up to 64,000 of them, with each queue holding up to 64,000 entries.
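To make the difference concrete, here is a back-of-the-envelope sketch in Python using the per-spec maximums quoted above. Treat the NVMe figure as a theoretical ceiling: real drives and drivers expose far fewer queues.

```python
# Back-of-the-envelope comparison of maximum outstanding commands.
# Queue counts and depths are the per-spec maximums quoted above,
# not what any particular drive or driver actually exposes.
protocols = {
    "SATA (NCQ)": {"queues": 1, "depth": 32},
    "SAS":        {"queues": 1, "depth": 254},
    "NVMe":       {"queues": 64_000, "depth": 64_000},
}

for name, p in protocols.items():
    outstanding = p["queues"] * p["depth"]
    print(f"{name:>10}: {p['queues']:>6} queue(s) x {p['depth']:>6} entries"
          f" = {outstanding:,} outstanding commands")
```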

In other words, we are talking about an architecture that allows applications to start, execute, and finish multiple I/O requests simultaneously and use the underlying media in the most efficient way to maximise speed and minimise latencies. In this context, spinning disks are simply old hat.
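As a rough illustration of what that parallelism means for applications, the following Python sketch (Unix-only, as it relies on os.pread) issues many overlapping reads against a single placeholder file. On NVMe the kernel's multi-queue block layer can spread such requests across hardware queues; on SATA they all funnel through one 32-entry queue.

```python
import os
from concurrent.futures import ThreadPoolExecutor

# Illustrative sketch only: issue many small reads in parallel
# against one file. "testfile.bin" is a placeholder for a
# pre-existing file on the storage under test.
PATH = "testfile.bin"
BLOCK_SIZE = 4096          # 4 KiB, a common random-I/O size
NUM_REQUESTS = 1024

fd = os.open(PATH, os.O_RDONLY)

def read_block(offset: int) -> bytes:
    # os.pread reads at an absolute offset without moving a shared
    # file cursor, so worker threads don't contend on seek state.
    return os.pread(fd, BLOCK_SIZE, offset)

try:
    with ThreadPoolExecutor(max_workers=32) as pool:
        offsets = [i * BLOCK_SIZE for i in range(NUM_REQUESTS)]
        results = list(pool.map(read_block, offsets))
    print(f"Completed {len(results)} overlapping reads")
finally:
    os.close(fd)
```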


For those companies tapping into this kind of efficiency via private cloud, the speeds are game-changing: demanding application workloads can run on a smaller infrastructure footprint, at a far lower cost once performance is taken into account.

So what is the NVMe that makes this possible? Non-Volatile Memory Express is a protocol developed specifically for solid-state drives by a consortium of suppliers including Intel, Samsung, SanDisk, Dell and Seagate. Benchmark tests show it sustaining throughput around 15 times that of HDD, and five to six times that of SATA SSD.
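If you want a rough sense of throughput on your own hardware, a minimal probe like the following Python sketch will do. The file name is a placeholder, and a proper benchmarking tool such as fio controls caching, queue depth and access patterns far more carefully.

```python
import time

# Rough sequential-read throughput probe. "testfile.bin" is a
# placeholder: point it at a large file on the storage under test.
# Note the OS page cache will flatter the numbers unless the file
# is larger than RAM or the cache is dropped first.
PATH = "testfile.bin"
CHUNK = 1 << 20            # read in 1 MiB chunks

total = 0
start = time.perf_counter()
with open(PATH, "rb", buffering=0) as f:
    while True:
        data = f.read(CHUNK)
        if not data:
            break
        total += len(data)
elapsed = time.perf_counter() - start

print(f"Read {total / 1e6:.0f} MB in {elapsed:.2f} s "
      f"-> {total / elapsed / 1e6:.0f} MB/s")
```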

Of course, hard drives aren't history yet. Datacentres the world over are still full of them, and in certain contexts they still offer tremendous low-cost capacity, particularly for less frequently accessed data. But when serious high-performance computing is on the agenda, or for core applications in constant use, they simply aren't fit for purpose in 2019.

What kinds of compute-intensive enterprise operations and functions demand fast data, and therefore fast and versatile NVMe storage? Here are a few familiar examples:

  • OLTP relational database platforms with intensive data-processing workloads
  • Backing up or replicating data where near-real-time compliance obligations make it a requirement
  • High-performance computing (HPC) research initiatives demanding speed and performance for faster completion of jobs and improved use of clusters
  • Drop-in acceleration for certain workloads to boost struggling servers
  • Managing virtualisation clusters, which NVMe makes easier through its support for multi-tenant applications, databases and heterogeneous workloads

The overarching point, whatever examples we pick out, is that real-time analysis and response is becoming crucial for more and more businesses. The analyst group IDC, for example, recently predicted that by next year (2020) about two-thirds of the world's largest global companies will have at least one mission-critical workload that relies on real-time big-data analytics. That gives you a sense of the scale of big-data take-up.

For now, though, the analysts also agree that adoption of the best storage is not keeping pace with the big-data vision, which makes private cloud an immediate way to bridge the gap and even get ahead.

In particular, first-class private cloud has extra appeal today because the best enterprise-grade storage delivers denser consolidation of mixed enterprise workloads. In a mixed-workload environment, traditional database applications and modern web-scale applications share the same infrastructure. That is a step up from simply buying in high performance for a single, dedicated workload: a tactic some have adopted in recent years, but one now being left behind as big data becomes pervasive.
