D is for DriveScale

2020-05-08T07:43:37-07:00

In the much-anticipated finale of this four-part series, we bring you the DriveScale Solution. Did you miss the first three parts? Catch up with Part 1, Part 2, and Part 3.

D is for DriveScale. (To the left is the American Sign Language (ASL) representation for the letter D).

DriveScale’s solution orchestrates disaggregated compute, GPU, and storage, providing customized instances on demand that are optimized for each workload. Our focus is on data-intensive applications, whether bare metal or containerized under the control of Kubernetes, where managing a disaggregated data center at scale is critical.

Let’s talk about why this is important.

As described in the previous article, scale-out computing is the ubiquitous architecture for data-intensive applications. Whether deployed on bare-metal instances in the data center, in lightweight containers, or in the cloud, it is the scale-out architecture that dominates data-intensive application deployment today.

As originally conceived, scale-out computing used commodity off-the-shelf hardware that combined compute and storage (local disk) in each server, deployed across standard Ethernet networks. Scale-out allowed a partitioned application to run on a large group of these “low cost” servers. The low cost was in comparison to more expensive scale-up servers attached to traditional “enterprise” storage, often over a Fibre Channel SAN.

Many at first thought only a few applications would lend themselves to partitioning: dividing the work among a set of compute nodes so that each works on its own distinct part of the problem, with the partial results later rolled up into a unified answer. This partitioning was the key to enabling low-cost, unshared local storage to be used for large-scale distributed applications.
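
To make the pattern concrete, here is a minimal, purely illustrative sketch (not DriveScale code) of partition-and-roll-up: the work is split across local workers, each computes a partial result, and the partial results are combined at the end. The dataset and the summing task are invented for illustration.

```python
from multiprocessing import Pool

def process_partition(partition):
    # Each worker handles its own distinct slice of the data,
    # touching only "local" data and producing a partial result.
    return sum(partition)

if __name__ == "__main__":
    data = list(range(1_000_000))      # the full dataset (invented)
    num_nodes = 4                      # stand-ins for scale-out nodes
    chunk = len(data) // num_nodes
    partitions = [data[i * chunk:(i + 1) * chunk] for i in range(num_nodes)]

    with Pool(num_nodes) as pool:
        partial_results = pool.map(process_partition, partitions)

    # Roll up the partial results into a unified answer.
    print(sum(partial_results))
```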

As time went on, applications written in a partitioned, scale-out fashion became the norm rather than the exception.

Problems arose as the number of distinct scale-out applications in the data center increased. Each application was best served by an optimized SKU that combined the right amount of storage of the right type (spinning disk, and later solid-state) with the right amount of compute as the basic unit of node scale-out. Because each application had different requirements, the result was a proliferation of unique hardware platforms, each ill-suited for use by other applications. Predicting the growth of any one application was difficult, and this led to overprovisioning and underutilized resources.

But reintroducing an “enterprise” networked storage layer to separate storage from compute again was neither affordable nor capable of the required performance.

Disaggregating high-volume, industry-standard storage (HDDs and SSDs) over now-ubiquitous high-speed Ethernet, and moving compute and later GPUs onto the same fabric, restores the flexibility of separate resources that can be composed on demand from a smaller set of common SKUs. Ethernet fabrics with end-to-end NVMe capability provide high performance and ease of deployment compared to any other approach.
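
Conceptually, composition means drawing compute, GPUs, and drives from shared pools on the fabric and binding them into a logical node only when a workload needs them, then returning them when it is done. The toy Python sketch below shows that bookkeeping under invented names and counts; it is not DriveScale’s API, just an illustration of composing and releasing resources on demand.

```python
from dataclasses import dataclass

@dataclass
class ResourcePools:
    # Free resources available on the fabric (invented example counts).
    cpus: int = 256
    gpus: int = 16
    drives: int = 480          # HDDs/SSDs exposed over Ethernet/NVMe

@dataclass
class ComposedNode:
    name: str
    cpus: int
    gpus: int
    drives: int

def compose(pools, name, cpus, gpus, drives):
    # Bind resources from the shared pools into a logical node on demand.
    if cpus > pools.cpus or gpus > pools.gpus or drives > pools.drives:
        raise RuntimeError("not enough free resources on the fabric")
    pools.cpus -= cpus
    pools.gpus -= gpus
    pools.drives -= drives
    return ComposedNode(name, cpus, gpus, drives)

def release(pools, node):
    # Return the node's resources to the pools when the workload ends.
    pools.cpus += node.cpus
    pools.gpus += node.gpus
    pools.drives += node.drives

pools = ResourcePools()
analytics = compose(pools, "analytics-worker-01", cpus=16, gpus=0, drives=12)
training = compose(pools, "ml-train-01", cpus=32, gpus=4, drives=8)
release(pools, analytics)   # capacity returns to the pool, not to a stranded SKU
```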

Composable Infrastructure is applicable to all applications in a data center and provides a path for increasing efficiency and reducing costs.

Disaggregated Compute and Storage

DriveScale is deeply integrating our technology with Kubernetes because we see that data center architects are increasingly looking for a common framework to orchestrate their sprawling scale-out applications, one that allows for simplified and unified automation of the data center.

In our view, Kubernetes provides the common logical orchestration for scale-out applications, while DriveScale’s orchestrator delivers dynamic, high-performance provisioning of disaggregated physical assets and provides key insights into the health and performance of the physical infrastructure underlying Kubernetes. Dynamic provisioning is automated and policy driven, helping to eliminate operator error and free up system administrator time for more important tasks.
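
As one illustration of what dynamic provisioning can look like from the Kubernetes side, the sketch below uses the standard Kubernetes Python client to request storage through a PersistentVolumeClaim, leaving the provisioner behind the storage class to carve out the physical capacity. The storage class name "drivescale-nvme", the claim name, and the size are invented placeholders rather than details of DriveScale’s actual integration.

```python
from kubernetes import client, config

config.load_kube_config()   # use the local kubeconfig
core = client.CoreV1Api()

# Request 100 GiB of dynamically provisioned storage from a hypothetical
# disaggregated-storage class; the provisioner behind the class decides
# which physical drives on the fabric back the volume.
pvc = client.V1PersistentVolumeClaim(
    metadata=client.V1ObjectMeta(name="demo-data"),
    spec=client.V1PersistentVolumeClaimSpec(
        access_modes=["ReadWriteOnce"],
        storage_class_name="drivescale-nvme",   # invented name for illustration
        resources=client.V1ResourceRequirements(requests={"storage": "100Gi"}),
    ),
)

core.create_namespaced_persistent_volume_claim(namespace="default", body=pvc)
```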

Industry support for disaggregation and composition is increasing. The emergence of SmartNICs cements the industry’s direction toward composable infrastructure, providing a useful platform abstraction for large service providers along with increased security. NVMe over Ethernet will soon be the norm. Hardware vendors are on board with composable infrastructure, but end users are still learning best practices for deployment.

DriveScale is here to help you succeed on that journey.

About the Author:

Brian Pawlowski has years of experience building technologies and leading teams in high-growth environments at global technology companies. He is currently the CTO at DriveScale. Previously, Brian served as Vice President and Chief Architect at Pure Storage, focused on improving the user experience for the all-flash storage platform provider’s rapidly growing customer base. As CTO at storage pioneer NetApp, Brian led the first SAN product effort and founded NetApp Labs to collaborate with universities and research centers on new directions in data center architectures.
