Question & Answer

2018-11-14T22:05:25+00:00
Modern applications are designed for big data, AI, machine learning, social media and other hyperscale workloads. They store large amounts of data and use large amounts of compute power. Modern applications are built on distributed computing clusters that can be as large as tens of thousands of servers. Some examples of modern applications are those based on Hadoop, NoSQL, object stores and containers. Sometimes modern applications are called “scale-out” applications because they expand easily in a horizontal fashion by just adding more servers to support usage growth. Familiar modern applications include AdTech, webscale B2C transaction systems and financial analysis systems.
The standard infrastructure for these modern apps is a cluster of commodity servers with direct-attached storage, either hard disks or flash drives, interconnected with 10 Gb or 25/100 Gb Ethernet switches. Very large distributed applications run on hundreds or thousands of identical servers that form a cluster. Data centers often have different silos of clusters tuned to each type of modern application. Since these applications run at very large scale, cost control is always a key issue, and the lowest cost way to buy lots of compute and storage is to use commodity servers and storage.
Modern applications require different amounts of compute, storage and network capacity, but today’s IT infrastructure is built with a limited number of server configurations containing pre-packaged ratios of compute and storage. Since compute to storage ratios are decided at purchase time, and are fixed for the life of the server, enterprises tend to over-provision compute or storage to ensure they don’t run out of resources during the lifetime of the server.

  • Often extra drives are ordered with each server to make sure there is enough I/O bandwidth and capacity to deal with growth over the 3-4 year life of the server.
  • Sometimes, compute is over-provisioned in high data storage applications, as only so many drives can fit in a typical commodity server. When large storage capacity is needed, but access or processing of the data is infrequent, processors sit idle nursemaiding the drives in their “care.”
  • In either case, utilization can be as little as 10% of capacity.

What’s needed is the ability to choose the amount of compute and storage, independently and dynamically, for each application or workload with the ability to change and adjust over time. These choices need to be enabled on-demand and driven by software intelligence.

Mega public cloud providers, such as Amazon and Google, developed an architecture for hyperscale modern applications that provides different amounts of compute and storage, along with needed network capacity, on demand to serve the needs of their customers. This agile infrastructure enables public cloud customers to use and pay for only what they need, and at the same time, allows public cloud providers to easily reclaim the infrastructure for re-use from one customer to the next. This infrastructure re-use, along with commodity servers and storage, saves costs for the public cloud provider.

This same architecture approach is needed inside data centers in a private cloud environment to bring high performance, agility and low cost to those applications that will remain inside the data center. Composable Infrastructure was developed using cloud infrastructure constructs to enable data centers to operate their own infrastructure like a cloud service provider.

Composable Infrastructure was designed to meet the needs of hyperscale, modern applications. It is an approach to data center design that breaks IT infrastructure up into compute nodes, storage and a high speed fabric/network. You purchase commodity, diskless servers (compute nodes) as the platform for your applications, and at the same time, you purchase lower cost, commodity storage – either disk drives or flash drives. Then, under
software orchestration (single button deployment), you dynamically configure the compute nodes with just the right amount and type of storage to fit the application’s service level requirement, either inside a graphical user interface or via an API.
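The idea of drawing storage from a shared pool under software control can be sketched in a few lines of Python. This is purely illustrative, not a real product API; the function names, pool layout and drive identifiers are all invented for the example:

```python
# Hypothetical sketch of software-orchestrated composition: diskless
# compute nodes are given drives from a shared pool, in whatever
# ratio each workload needs. All names here are illustrative.

def compose_node(name, drive_pool, num_drives):
    """Allocate num_drives from the shared pool to a diskless node."""
    if len(drive_pool) < num_drives:
        raise ValueError("not enough free drives in the pool")
    assigned = [drive_pool.pop() for _ in range(num_drives)]
    return {"node": name, "drives": assigned}

def dissolve_node(node, drive_pool):
    """Return a node's drives to the pool for re-use elsewhere."""
    drive_pool.extend(node.pop("drives"))

# A pool of commodity drives, identified by JBOD slot for illustration.
pool = [f"jbod0/slot{i}" for i in range(24)]

# A storage-heavy analytics node gets 12 drives; a compute-heavy web
# node gets only 2. The ratio is chosen per workload at compose time,
# not fixed at server purchase time.
analytics = compose_node("analytics-01", pool, 12)
web = compose_node("web-01", pool, 2)
print(len(pool))  # 10 drives remain free

# When the analytics workload ends, its drives are reclaimed at once.
dissolve_node(analytics, pool)
print(len(pool))  # 22 drives back in the pool
```

The point of the sketch is that composition and dissolution are cheap software operations over an inventory, which is what lets the same commodity hardware be re-used from one workload to the next.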

This architecture simplifies data center deployment by allowing IT to deploy the same commodity servers and commodity storage for any application, but to deploy these resources in just the right ratio needed per application or workload. This approach provides extreme agility and flexibility, while eliminating the need for over-provisioning and dramatically lowering infrastructure cost.

Composable Infrastructure has many advantages including high performance, extreme scale, dynamic configuration, cloud-like agility, hardware independence, resiliency and operational optimization. Importantly, it breaks down application silos without the need to modify the applications themselves. Composable infrastructure is transparent to the applications it supports.

IT organizations find their most impactful benefits are in deployment speed, agility and cost reductions.

Just-in-Time Capacity Planning

  • Easily deploy the optimal compute to storage ratio per workload
  • Eliminate costly over-provisioning and under-utilization
  • Adapt infrastructure in real time as workload demands change

Quick Provisioning

  • Provision compute, storage and network bandwidth in an automated fashion via GUI or API
  • Create clusters in seconds and auto configure network fabric on-the-fly
  • Scale compute and storage resources up and down as needed
  • Replace failed elements instantly with no performance impact

Optimized Resource Usage

  • Avoid static, inflexible, appliance-based configurations
  • Eliminate over-capacity and wasted, idle resources
  • Decouple compute and storage upgrade cycles to maximize the lifetime value of each resource
Hyper-converged infrastructure (HCI) was designed for a different type of application. The hyperscale, modern applications we have been discussing run single, large, partitioned applications on hundreds or thousands of servers in a cluster. HCI, by contrast, was designed to run many applications on each server. HCI has been deployed to improve the manageability and performance of traditional applications such as desktop virtualization, SQL databases and ERP systems. These applications have known data processing and scaling characteristics. Modern applications require extreme performance and scale combined with agility and dynamic configuration. Composable Infrastructure was designed for these hyperscale applications.
Storage requirements in many enterprise deployments have traditionally been met with a storage area network (SAN) or network-attached storage (NAS). The advantages of SAN/NAS storage include dedicated networks to carry storage traffic and advanced features such as RAID, snapshots and backups. These separate storage systems are expensive and require extra staff to maintain. Modern applications don’t need these advanced storage features since data resilience and duplication are handled at the application level.

Modern application clusters typically use direct-attached storage (DAS) in each server making up a cluster, instead of NAS/SAN. This is a far less expensive way to provide storage. Modern applications have taken on the responsibility for data resiliency, snapshots, backups, etc., so that these low cost commodity platforms can be used.
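Hadoop’s HDFS is a concrete example of resilience moving up to the application level: rather than relying on RAID or a SAN, HDFS replicates each data block across the cluster’s direct-attached drives, controlled by a single setting in `hdfs-site.xml`:

```xml
<!-- hdfs-site.xml: HDFS keeps 3 copies of each block spread across
     the cluster's DAS drives, so the commodity storage underneath
     needs no RAID or SAN-level protection of its own. -->
<property>
  <name>dfs.replication</name>
  <value>3</value>
</property>
```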

Composable Infrastructure provides the best of both worlds, allowing the flexibility of NAS with the performance and low cost of DAS via commodity drives. In fact, because composability eliminates the need for over-provisioning and provides for separate lifecycle management for compute and storage, it is the least expensive of the three approaches.

Your cloud strategy no doubt includes identifying which applications will move to a public cloud and which will remain on premises. You may need to keep applications on premises for security, policy, performance or cost reasons. Composable Infrastructure allows enterprises to build web-scale infrastructure too by applying cloud infrastructure technologies inside the data center. Since the architecture for composable design came from mega public cloud providers like Amazon and Google, it allows you to run your data center with the agility and economics of a public cloud, enabling IT to be a service provider of excellence to its internal line-of-business (LOB) customers.
DriveScale’s Composable Platform provides a highly secure solution that enables IT to easily deploy modern applications with the right amount of compute, storage and network capacity. DriveScale’s design is intended to create hundreds or thousands of custom-configured compute nodes to build clusters in a massively parallel structure.

The DriveScale Composer is the heart of the DriveScale solution and offers a single pane of glass to very simply compose custom clusters for a variety of application deployments. Usage tracking and health monitoring provide the information needed to make the most efficient use of your composable infrastructure. Node and cluster templates are created to define the custom servers and clusters designed for each workload.

DriveScale brings storage into its composable model by attaching to either HDD storage deployed in JBODs (Just a Bunch of Disks) or flash storage deployed in eBOFs (Ethernet-attached Bunch of Flash). The JBODs and eBOFs are attached to high-speed Ethernet via Adapters made by DriveScale and third-party partners. Lightweight agents are deployed on the servers; these agents receive drive assignments from the Composer to create the server nodes from which clusters are composed.
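DriveScale’s agent protocol is not spelled out here, but a rough public analog of attaching Ethernet-connected flash to a diskless server is the standard NVMe-over-TCP initiator in `nvme-cli`. The address and subsystem NQN below are placeholders, and this is an analogy, not the DriveScale mechanism itself:

```shell
# Rough analog (not the DriveScale agent): attach a remote NVMe
# namespace from an Ethernet-attached flash enclosure to a diskless
# server using the standard nvme-cli NVMe/TCP initiator.
# The address and NQN below are placeholders.
nvme connect -t tcp -a 192.0.2.10 -s 4420 \
     -n nqn.2014-08.org.example:ebof-subsystem

# The remote drive then appears as a local block device
# (e.g. /dev/nvme1n1) and can be formatted and mounted exactly
# like direct-attached storage.
nvme list
```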

The DriveScale Composer keeps the inventory of all compute, storage and networking resources. It creates and breaks bindings amongst the resources to compose and dissolve infrastructure on demand. DriveScale uses public key encryption between servers and storage drives for maximum security. Data is also encrypted at rest and on the wire.

DriveScale’s Composable Platform can be added to your data center without changing any existing infrastructure. Simply select the modern application you want to start with and follow the “How To” recipe below. Application stacks are unaware of the composed infrastructure running beneath them and require no changes at all.

How To Get Started with DriveScale

Purchase Commodity Servers & Storage

  • Purchase diskless, commodity, bare metal servers per compute needs
  • Purchase commodity HDD storage deployed in JBODs or flash storage deployed in eBOFs per storage needs
  • Specify storage as a separate instance through JBODs or eBOFs to disaggregate storage from compute

Build Pools of Resources

  • Next, install either the DriveScale NVMe or SAS Adapter to connect storage to the network
  • Deploy lightweight DriveScale agents on the compute nodes to allow them to bind to storage and be composed into secure virtual clusters

Compose Your Infrastructure

  • Now provision custom clusters using the DriveScale Composer
  • Create custom clusters that correspond to the needs of each modern application/workload and adjust dynamically as needed

You’re Now Composing Infrastructure Dynamically