Often storage is planned one workload or deployment at a time, creating silos of data and operational inefficiencies. A new model for storage is emerging that will eliminate this problem, and it combines three key technologies: NVMe™, NVMe over Fabrics (NVMe-oF™), and Composable Infrastructure.
NVMe is a protocol for accessing storage over the PCI Express (PCIe) bus. SSDs with NVMe interfaces support 3X more IOPS than 12Gbps SAS SSDs because the NVMe protocol is massively parallel: where SAS has just one command queue, NVMe supports up to 64,000, taking advantage of multi-core processors and multiple PCIe lanes that can be accessed in parallel. According to Eric Burgener, storage research director at IDC, in IDC’s NVMe in Enterprise Storage Systems report, enterprise NVMe SSDs displaced SAS SSDs this year and will reach almost $10B in revenue in 2020.
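The queue-count difference above translates into a huge gap in how many commands each protocol can keep in flight. Here is a rough back-of-envelope sketch; the SAS queue depth is an illustrative assumption, and the NVMe figures are the cited maximums rather than a typical deployment:

```python
# Rough command-parallelism comparison behind the queue counts above.
# SAS_DEPTH is an illustrative assumption; NVMe figures are the cited maximums.

SAS_QUEUES, SAS_DEPTH = 1, 256            # one queue, ~256 outstanding commands
NVME_QUEUES, NVME_DEPTH = 64_000, 64_000  # up to 64,000 queues, 64,000 deep

sas_outstanding = SAS_QUEUES * SAS_DEPTH
nvme_outstanding = NVME_QUEUES * NVME_DEPTH
print(f"NVMe can keep {nvme_outstanding // sas_outstanding:,}x more commands in flight")
```

The point isn't the exact ratio; it's that per-core queues let every CPU core submit I/O without contending for a single lock-protected queue.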
NVMe is designed for both high bandwidth and low latency, according to the NVMe industry association, NVMexpress®. To match the IOPS of NVMe drives, you would need 3-5X more SAS SSDs or 10-20X more SATA SSDs, which unnecessarily consumes data center footprint and significantly increases power and cooling costs. NVMe’s power efficiency and small form factor, as well as its much lower $/IOPS, are driving broad NVMe adoption across data centers.
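The footprint math can be sketched with a quick calculation. The per-drive IOPS numbers below are hypothetical round figures chosen to be consistent with the 3X and 10X ratios cited above, not benchmark results:

```python
# Back-of-envelope drive-count comparison using the ratios cited above.
# Per-drive IOPS values are hypothetical placeholders, not measurements.

def drives_needed(target_iops: int, iops_per_drive: int) -> int:
    """Smallest number of drives whose combined IOPS meets the target."""
    return -(-target_iops // iops_per_drive)  # ceiling division

NVME_IOPS = 600_000  # assumed sustained random-read IOPS per NVMe SSD
SAS_IOPS = 200_000   # ~1/3 of NVMe, per the 3X claim above
SATA_IOPS = 60_000   # ~1/10 of NVMe, per the 10X claim above

target = 3_000_000   # e.g. a database tier needing 3M IOPS
for name, per_drive in [("NVMe", NVME_IOPS), ("SAS", SAS_IOPS), ("SATA", SATA_IOPS)]:
    print(f"{name}: {drives_needed(target, per_drive)} drives")
```

Every extra drive also carries its share of slots, power, and cooling, which is where the footprint and $/IOPS advantages come from.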
While there are many benefits to deploying NVMe flash technology, the most important value-add is NVMe over Fabrics (NVMe-oF™). NVMe-oF enables servers to access NVMe drives over standard Ethernet (as well as InfiniBand and Fibre Channel) by creating a data fabric using RoCE (RDMA over Converged Ethernet) or TCP.
RoCE is designed for low latency, and Mellanox supports RoCEv2 on its NICs. NVMe/TCP is newer but doesn’t require a specialized fabric. In addition, TCP is a well-understood, highly scalable network protocol. If an application isn’t extremely sensitive to latency (and most aren’t), then TCP will be the most cost-effective and simplest solution.
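On a Linux host, either transport is attached with the standard nvme-cli tool. As a sketch, the helper below builds the `nvme connect` command for a given transport; the target address and NQN shown are placeholders, not real endpoints:

```python
# Sketch: building the nvme-cli command that attaches a remote NVMe-oF
# subsystem. Address and NQN below are placeholder values.

def nvme_connect_cmd(transport: str, addr: str, nqn: str,
                     svcid: str = "4420") -> list:
    """Return nvme-cli arguments for attaching an NVMe-oF subsystem.

    transport: "tcp" for NVMe/TCP, "rdma" for NVMe/RoCEv2.
    """
    assert transport in ("tcp", "rdma")
    return ["nvme", "connect",
            "-t", transport,   # transport type
            "-a", addr,        # target address
            "-s", svcid,       # transport service ID (4420 is the default port)
            "-n", nqn]         # NVMe Qualified Name of the subsystem

cmd = nvme_connect_cmd("tcp", "192.0.2.10",
                       "nqn.2014-08.org.nvmexpress:uuid:example")
print(" ".join(cmd))
```

Switching a host from NVMe/TCP to NVMe/RoCEv2 is just a different `-t` value, which is why the fabric choice can follow the workload rather than the hardware.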
Accessing storage over a network isn’t new; however, it has changed substantially from legacy, proprietary enterprise storage arrays and SANs. Today, there are industry-standard, commodity-priced all-NVMe arrays, which make it possible to easily deploy low-cost NVMe storage systems from multiple vendors using an orchestration application. Software that discovers resources, creates secure bindings of drives to compute, and adds, reduces, or replaces resources on the fly eliminates the complexity of deploying storage and enables a highly efficient way to serve multiple workloads. That’s how composable infrastructure works.
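The discover/bind/release cycle at the heart of composable infrastructure can be sketched as a small state machine. This is a hypothetical in-memory model standing in for a real orchestrator's APIs, not any vendor's actual interface:

```python
# A minimal sketch of the composable-storage workflow described above.
# The Composer class and its methods are hypothetical illustrations.

from dataclasses import dataclass, field

@dataclass
class Composer:
    free_drives: set = field(default_factory=set)  # discovered, unbound drives
    bindings: dict = field(default_factory=dict)   # compute node -> bound drives

    def discover(self, drives):
        """Add newly discovered NVMe drives to the free pool."""
        self.free_drives.update(drives)

    def compose(self, node: str, count: int) -> set:
        """Bind `count` free drives to a compute node over the fabric."""
        if count > len(self.free_drives):
            raise RuntimeError("not enough free drives")
        picked = {self.free_drives.pop() for _ in range(count)}
        self.bindings.setdefault(node, set()).update(picked)
        return picked

    def release(self, node: str):
        """Return a node's drives to the free pool ('on the fly' elasticity)."""
        self.free_drives.update(self.bindings.pop(node, set()))

c = Composer()
c.discover({"nvme0", "nvme1", "nvme2", "nvme3"})
c.compose("worker-1", 2)   # bind two drives to a compute node
c.release("worker-1")      # hand them back when the workload ends
print(len(c.free_drives))  # all 4 drives back in the pool
```

Because the pool is shared rather than siloed per workload, capacity freed by one job is immediately available to the next.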
DriveScale makes NVMe and NVMe-oF an easy choice. By eliminating the complexity of deploying heterogeneous, commodity storage while increasing availability and maintaining the performance of local drives, DriveScale makes it simple and cost-effective to deploy NVMe storage, with enormous flexibility including your choice of data fabric (NVMe/TCP, NVMe/RoCEv2, NVMe/iSCSI). DriveScale fully automates the creation of the NVMe-oF data fabric and attaches (composes) the SSDs, or slices of SSDs, to compute or GPU nodes. In an instant you are ready to deploy a workload, and you can add, remove, or replace compute or storage resources on the fly.
Today, NVMe-oF is being deployed for AI and machine learning, SQL and NoSQL databases, and big data analytics. These data-intensive applications demand high I/O operations per second (IOPS) at scale, driving the need for NVMe-oF. DriveScale eliminates the complexity of deploying elastic infrastructure and NVMe-oF on premises, providing a seamless end-to-end experience.