Modern workloads such as Hadoop, Kafka and machine learning are demanding in terms of the volume of data that must be processed, the speed at which that data must be processed, and the fact that their capacity and performance requirements are both variable and unpredictable. They require a blend of performance, adaptable capacity, configuration flexibility and cost-efficiency through better resource utilization that legacy direct-attached and shared storage architectures simply cannot deliver.
As Storage Switzerland previously blogged, the intersection of non-volatile memory express (NVMe) over Fabrics (NVMe-oF) and a composable architecture stands to drive down costs and increase flexibility while delivering the levels of performance that the enterprise’s most demanding workloads require. Composable architectures that support a variety of network options such as iSCSI, NVMe over TCP and NVMe over RDMA give users the flexibility to optimize the network design for cost, performance or latency. In this installment, we will assess the DriveScale Composable Platform from this vantage point.
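To make that transport trade-off concrete, the sketch below shows how a composition request might select a fabric transport based on whether cost, throughput or latency matters most. It is a minimal illustration built around a hypothetical FabricSpec helper, not DriveScale's actual API or tooling: iSCSI generally favors cost and broad compatibility, NVMe over TCP raises throughput on standard Ethernet, and NVMe over RDMA minimizes latency on RDMA-capable NICs.

```python
# Hypothetical illustration only -- not DriveScale's API.
# Maps a workload priority to a plausible fabric transport choice.
from dataclasses import dataclass

@dataclass
class FabricSpec:
    transport: str   # "iscsi", "nvme-tcp", or "nvme-rdma"
    rationale: str

def pick_transport(priority: str) -> FabricSpec:
    """Choose a transport for a composed node based on what the workload values most."""
    if priority == "cost":
        # iSCSI runs over any Ethernet network and commodity NICs.
        return FabricSpec("iscsi", "lowest cost, broadest compatibility")
    if priority == "performance":
        # NVMe over TCP keeps standard Ethernet but sheds the SCSI stack overhead.
        return FabricSpec("nvme-tcp", "high throughput on standard Ethernet")
    if priority == "latency":
        # NVMe over RDMA bypasses the host TCP stack for the lowest latency,
        # but requires RDMA-capable (RoCE/iWARP) NICs end to end.
        return FabricSpec("nvme-rdma", "lowest latency, needs RDMA NICs")
    raise ValueError(f"unknown priority: {priority}")

print(pick_transport("latency"))
```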
Composable infrastructures abstract the resources of physical systems into common pools, from which virtual systems tailored to workload-specific requirements can be spun up and down on the fly. Composable infrastructure enables greater agility in spinning up application-specific systems. Because resources are pooled, it also stands to improve compute, memory and storage capacity utilization alike, setting the stage to accelerate performance cost-effectively. The problem is that many composable infrastructure solutions are limited to a specific chassis or proprietary switch and cannot scale seamlessly across racks.
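The pooling-and-composition model described above can be pictured with a short sketch: physical servers and drives are registered into shared pools, a virtual node is composed from whichever pool members satisfy a workload's requirements, and the resources are released back to the pool when the workload winds down. The ResourcePool, compose_node and release_node names below are hypothetical, a minimal sketch of the idea rather than any vendor's interface.

```python
# Hypothetical sketch of composable-infrastructure semantics; not a vendor API.
from dataclasses import dataclass, field

@dataclass
class Resource:
    name: str
    kind: str        # "compute" or "storage"
    capacity: int    # cores for compute, GB for storage
    in_use: bool = False

@dataclass
class ResourcePool:
    resources: list = field(default_factory=list)

    def allocate(self, kind: str, needed: int) -> list:
        """Claim unused resources of one kind until the requested capacity is met."""
        claimed, total = [], 0
        for r in self.resources:
            if r.kind == kind and not r.in_use and total < needed:
                r.in_use = True
                claimed.append(r)
                total += r.capacity
        if total < needed:
            # Roll back the partial claim if the pool cannot satisfy the request.
            for r in claimed:
                r.in_use = False
            raise RuntimeError(f"pool cannot satisfy {needed} of {kind}")
        return claimed

def compose_node(pool: ResourcePool, cores: int, storage_gb: int) -> dict:
    """Compose a workload-specific virtual node from the shared pools."""
    return {
        "compute": pool.allocate("compute", cores),
        "storage": pool.allocate("storage", storage_gb),
    }

def release_node(node: dict) -> None:
    """Return a composed node's resources to the pool for reuse."""
    for group in node.values():
        for r in group:
            r.in_use = False

# Example: a small pool spanning two racks, composed for a Kafka broker and then released.
pool = ResourcePool([
    Resource("server-a", "compute", 32),
    Resource("server-b", "compute", 32),
    Resource("jbod-1-ssd-0", "storage", 4000),
    Resource("jbod-2-ssd-3", "storage", 4000),
])
broker = compose_node(pool, cores=32, storage_gb=8000)
release_node(broker)
```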
Read the full article here.