Spend less on infrastructure and operations
Achieve higher utilization of compute and storage assets
Avoid under- or over-provisioning of resources
Decouple compute and storage upgrades
Reduce data center footprint
Add only the resource you need: compute or storage, not both
Operate your data center with the agility of a public cloud
Respond quickly to changing workloads
Spin up clusters in minutes instead of weeks
Share resources across clusters and applications; no more silos
Recompose servers and clusters on-demand via software
Recover quickly from disk and server failures
Deploy with confidence
Integrates quickly and easily into your environment
No changes required to the application stack
Works with industry standard servers and storage of your choice
Performance equivalent to bare metal servers with direct attached storage
Redundant architecture for high availability
Data security with encryption at rest and in flight
The DriveScale System
DriveScale brings the benefits of hyperscale computing, originally developed by companies like Google and Amazon for their own use, to enterprise data centers running Hadoop and other big data workloads. Our “Software Composable Infrastructure” technology transforms rigid data centers into flexible and responsive scale-out deployments.
With DriveScale, IT teams can deploy independent pools of commodity compute and storage resources, and easily combine and recombine those resources into software-defined physical nodes and clusters that are indistinguishable from standard servers to the software running on them. Enterprises can respond faster to changing application environments, maximize the efficiency of their assets, and save on equipment and operating expenses.
Modern Workloads Require a New Approach
The advent of the big data era has driven innovation in how information is processed. Whereas individual physical or virtual servers were more than adequate for traditional business applications like ERP, modern workloads built for today’s social, mobile, analytics and IoT applications require a new approach that distributes the data, and the processing of that data, across dozens to thousands of commodity servers.
This “scale-out” distributed computing approach, originally employed by hyperscale companies like Google and Amazon, is now the standard platform for modern workloads such as Hadoop, NoSQL and containerized web services.
What is Software Composable Infrastructure?
While scale-out architectures were developed specifically to handle modern workloads, they are inherently inflexible: each node and cluster has a fixed ratio of compute to storage, so as application demand changes or data grows, organizations have had to either reconfigure their clusters or over-provision them to meet these dynamic loads. The result is excessive spending, underutilized resources and slow response to changing requirements.
Software Composable Infrastructure (SCI) is a next generation data center architecture developed to directly address these issues. SCI disaggregates standard servers into pools of compute and storage resources consisting of disk-lite servers and JBODs (Just a Bunch Of Disks). IT operators can compose servers and clusters on-the-fly under software control. For the first time, data center resources can be easily and quickly deployed where needed, even between different clusters.
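The compose-and-release workflow described above can be illustrated with a toy model. This is a minimal sketch, not DriveScale's actual management API: every class and method name here (ResourcePools, compose, release, and so on) is hypothetical, invented purely to show how disaggregated compute and disk pools can be bound into logical nodes and returned to the pools for reuse.

```python
from dataclasses import dataclass

@dataclass
class ComputeNode:
    name: str

@dataclass
class Disk:
    disk_id: str
    capacity_tb: int

@dataclass
class LogicalNode:
    compute: ComputeNode
    disks: list

class ResourcePools:
    """Toy model of disaggregated pools: disk-lite servers plus JBOD disks."""

    def __init__(self, servers, disks):
        self.free_compute = list(servers)
        self.free_disks = list(disks)

    def compose(self, disk_count):
        """Bind one free server to disk_count free disks, forming a logical node."""
        if not self.free_compute or len(self.free_disks) < disk_count:
            raise RuntimeError("insufficient free resources")
        server = self.free_compute.pop()
        disks = [self.free_disks.pop() for _ in range(disk_count)]
        return LogicalNode(server, disks)

    def release(self, node):
        """Return a logical node's server and disks to the shared pools."""
        self.free_compute.append(node.compute)
        self.free_disks.extend(node.disks)

# Compose two nodes with different compute-to-storage ratios from one pool.
pools = ResourcePools(
    servers=[ComputeNode(f"srv{i}") for i in range(4)],
    disks=[Disk(f"d{i}", capacity_tb=8) for i in range(24)],
)
hadoop_node = pools.compose(disk_count=12)  # storage-heavy node
web_node = pools.compose(disk_count=2)      # compute-heavy node
pools.release(hadoop_node)                  # resources return to the pool for reuse
```

The point of the sketch is the last three lines: the same physical inventory yields nodes with very different compute-to-storage ratios, and releasing a node frees its parts for a different cluster, which is not possible when disks are physically bound to servers.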
Beyond Hyperconverged: Software Composable
Steven Hill of 451 Research and Tom Lyon of DriveScale
March 1, 2018 12:00pm PT