From traditional servers to software composable infrastructure: what is the difference?

Traditional, converged, hyperconverged and now software composable infrastructure: the days when a data center was just a collection of simple servers are over. What is the difference between all these variations of data center architecture, and which one is ideal for you?

Software composable infrastructure is the latest buzzword in the rapidly changing world of data center architecture. The name says exactly what it is, but to understand what that means for your data center, your applications and the management of your environment, it is best to start at the beginning.

Traditional infrastructure

A data center is nothing more than a collection of hardware that an organization needs to run supporting applications and critical digital workloads. The legacy approach to building such a data center is still in use today but is losing popularity. Traditionally, an organization sets up hardware to run certain applications: IT invests in combinations of compute servers, storage and switches to meet what a specific application requires. The investment in that hardware has to last for three to four years, so it is important for the IT department to buy enough capacity.

Under the banner of traditional infrastructure, server stacks are assembled that meet the current needs of an organization, plus an extreme capacity margin, so that IT does not become a bottleneck in the period between CapEx investments. Overprovisioning is the technical term, and it means exactly what it sounds like: 'buying too much, just to be sure'.

Overprovisioning means "buying too much, just to be sure".
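A rough back-of-the-envelope sketch makes the cost of overprovisioning concrete. All figures below are hypothetical illustrations, not vendor pricing:

```python
# Back-of-the-envelope estimate of overprovisioning cost.
# All figures are hypothetical illustrations, not real pricing.

cost_per_tb = 100        # purchase cost per TB of storage, in euros
needed_now_tb = 50       # capacity the workloads actually need today
expected_growth = 1.5    # expected growth over the 3-4 year hardware lifetime
safety_margin = 2.0      # extra margin bought "just to be sure"

# Capacity actually required at end of life vs. capacity purchased up front
required_tb = needed_now_tb * expected_growth                    # 75 TB
purchased_tb = needed_now_tb * expected_growth * safety_margin   # 150 TB

overprovisioned_tb = purchased_tb - required_tb
wasted_spend = overprovisioned_tb * cost_per_tb

print(f"Purchased: {purchased_tb:.0f} TB, required: {required_tb:.0f} TB")
print(f"Bought but never needed: {overprovisioned_tb:.0f} TB "
      f"(~{wasted_spend:.0f} euros)")
```

Half of the storage budget in this toy example buys capacity that is never used, which is exactly the bottleneck-avoidance trade-off described above.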

Traditional infrastructure has the disadvantage of being a complex composition of system resources. After purchase, it may turn out that not all components cooperate as hoped, and it is in any case a labor-intensive job to get and keep the entire architecture operational. Moreover, the entire infrastructure was built to fit the applications that were relevant at the time of purchase. In three years' time, however, much can change.

Once operational, the traditional approach is not necessarily bad. Overprovisioning and high management costs aside, a traditional environment is capable of running all types of workloads, from applications on bare-metal servers to virtualized environments to containers. Traditional infrastructure is mainly expensive and not very flexible.

TL;DR: With traditional infrastructure, IT itself combines servers, storage, network and so on into a somewhat flexible whole that meets specific needs, and through overprovisioning spends much more than is actually necessary.

Converged infrastructure

The solution quickly came in the form of converged infrastructure. Hardware that is converged has been put together by the vendor beforehand: compute, memory and storage are combined in ready-made boxes that are guaranteed to work well together. That makes converged infrastructure much clearer and much easier to manage.

The difference between converged and traditional is actually not that big. With a converged approach you leave the composition of the infrastructure to a vendor, which saves you a lot of headaches, but the final product does not differ much from what you would ideally have put together yourself. Converged infrastructure is, in that respect, little more than traditional infrastructure delivered ready-made. The problem of overprovisioning and the associated costs remains, although you do save on management.

TL;DR: With converged infrastructure, the composition is outsourced to a third party, which makes purchasing and management easier and guarantees that everything works well together, but does little about overprovisioning and flexibility.

Hyperconverged infrastructure

Converged was followed by hyperconverged. In a hyperconverged environment, you buy server nodes that again contain compute, memory and storage. The big difference is that a hyperconverged node is built for virtualization. The storage is decoupled from memory and compute and placed in a pool that is accessible from different nodes. Hyperconverged solutions therefore only support containers and virtualized workloads; the approach is not compatible with bare-metal applications.

A hyperconverged environment offers all the benefits of a converged or traditional infrastructure with a Storage Area Network (SAN), without having to install and manage a complex SAN. This is particularly interesting for smaller organizations that want to run complex workloads on advanced infrastructure, but for which a full SAN goes too far.

Hyperconverged infrastructure is very easy to manage. The nodes can be maintained as a whole, and expanding capacity is easier than ever. Running short on storage? Slot a new node into your environment and the capacity grows immediately. The new storage ends up in the available pool and is instantly usable by your applications without much configuration work.

Hyperconverged hardware still closely resembles traditional servers but offers much more flexibility. Yet important financial and practical limitations remain.

With hyperconverged infrastructure, the data center world took a big step towards unprecedented flexibility, but disadvantages remain. Nodes always contain a combination of storage, compute and memory: if you want to buy a few extra terabytes of storage, you also pay for additional computing power that you may not need. The need for overprovisioning decreases drastically thanks to the flexible extensibility, but we cannot quite speak of capacity tailored to the workload. Management, however, is simpler than ever, with a significant cost reduction as a result.
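The bundling effect is easy to quantify. A minimal sketch, with entirely hypothetical node specifications: because resources only come packaged per node, buying storage alone is impossible.

```python
import math

# Hypothetical hyperconverged node: resources come bundled per node.
NODE_STORAGE_TB = 20
NODE_CORES = 32
NODE_PRICE = 15_000  # euros, illustrative only

def storage_expansion(extra_storage_tb):
    """Buying storage means buying whole nodes, cores included."""
    nodes = math.ceil(extra_storage_tb / NODE_STORAGE_TB)
    return {
        "nodes": nodes,
        "storage_tb": nodes * NODE_STORAGE_TB,
        "unwanted_cores": nodes * NODE_CORES,  # compute you pay for anyway
        "price": nodes * NODE_PRICE,
    }

# You only need 5 TB extra, but you still buy a full node:
print(storage_expansion(5))
```

In this toy example, 5 TB of extra storage drags 32 cores and a full node price along with it, which is precisely the "capacity not tailored to the workload" problem.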

An additional disadvantage is that hyperconverged infrastructure owes its flexibility to the virtualization of all workloads. Virtualization is the norm today, but there are still plenty of exceptions. For bare-metal workloads, you remain dependent on traditional or converged infrastructure.

TL;DR: Hyperconverged infrastructure is a combination of nodes in which storage is decoupled from compute and memory. The nodes combine into a relatively flexible whole, but hyperconverged infrastructure only works on the basis of virtualization and still does not let you purchase individual system resources.

Software composable infrastructure

Software composable infrastructure is the latest evolution in the data center landscape. You can see it as the successor of hyperconverged infrastructure, with further increased flexibility and the remaining disadvantages eliminated.

Software composable infrastructure completely eliminates the obligation to buy nodes containing resources you may not need. It provides a framework in which you can add computing power, memory and storage independently of each other, as you wish.

The three main system resources end up in one large pool, which is managed by software. That software, in this context called the composer, essentially assembles servers from the available system resources, completely tailored to the needs of the applications. To use a different term: software composable infrastructure can safely be called modular infrastructure, where you add system resources in flexible modules.

Software composable infrastructure can safely be called modular infrastructure.
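What the composer does can be sketched in a few lines: carve logical servers out of independent resource pools and return the resources afterwards. This is a purely illustrative in-memory model, not any vendor's software:

```python
# Minimal in-memory sketch of a composer: logical servers are carved
# out of independent resource pools. Purely illustrative.

class Composer:
    def __init__(self, cores, ram_gb, storage_tb):
        # One big pool per resource type, managed by software
        self.pool = {"cores": cores, "ram_gb": ram_gb, "storage_tb": storage_tb}
        self.servers = {}

    def compose(self, name, cores, ram_gb, storage_tb):
        """Assemble a logical server tailored to a workload's needs."""
        want = {"cores": cores, "ram_gb": ram_gb, "storage_tb": storage_tb}
        if any(self.pool[k] < v for k, v in want.items()):
            raise RuntimeError("pool exhausted: add a module of that resource")
        for k, v in want.items():
            self.pool[k] -= v
        self.servers[name] = want
        return want

    def decompose(self, name):
        """Return a server's resources to the pool."""
        for k, v in self.servers.pop(name).items():
            self.pool[k] += v

c = Composer(cores=128, ram_gb=1024, storage_tb=100)
c.compose("db-server", cores=16, ram_gb=256, storage_tb=20)
print(c.pool)  # remaining pool: 112 cores, 768 GB RAM, 80 TB
```

Note that expanding any single pool (say, storage) never forces you to buy the other two resources along with it, which is the key difference from hyperconverged nodes.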

Software composable infrastructure comes with a single management layer, usually provided with a powerful API. Through this API, developers can let their software talk to the management layer, which makes it possible for workloads to automatically request more system resources when they are needed.
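As a sketch of what such a request could look like: a workload builds a scale-up request and sends it to the composer's API. The endpoint and field names below are hypothetical, not any vendor's actual schema:

```python
import json

# Hypothetical request body a workload might POST to a composer's REST API
# (e.g. to an endpoint like /api/v1/allocations). The schema is illustrative.

def scale_up_request(workload, cores=0, ram_gb=0, storage_tb=0):
    """Build the JSON payload asking the composer for extra resources."""
    return json.dumps({
        "workload": workload,
        "add": {"cores": cores, "ram_gb": ram_gb, "storage_tb": storage_tb},
    })

# A database asking for 64 GB of extra RAM, and nothing else:
print(scale_up_request("analytics-db", ram_gb=64))
```

Because the request names each resource separately, the workload asks for exactly what it lacks rather than a whole node's worth of everything.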

Adding capacity is completely flexible. Want more storage? Add a few HDDs. Is computing power running low? Click a processor module into the framework. Is RAM capacity at its limits? The same story. Overprovisioning thus becomes completely unnecessary, which brings an immense potential cost saving.

Software composable infrastructure is not the same as a mountain of system resources on which all workloads are virtualized. Via the (usually web-based) management console it is perfectly possible to allocate the equivalent of a physical server for bare-metal workloads.

TL;DR: Software composable infrastructure consists of a pool of system resources to which you can add storage, memory and computing power independently of one another. Software distributes the system resources over applications. This way you invest in exactly what you need and expand capacity without paying for resources you do not need.

In practice

What does software composable infrastructure look like in practice? Think of large chassis with a lot of slots, which let you scale computing power, RAM and storage as you please. HPE, for example, offers the Synergy line-up: nodes with a certain capacity for the three most important system resources, which you can then scale at will. Dell counters Synergy with the PowerEdge MX range, which offers similar functionality. Smaller players such as DriveScale also offer total solutions, where compute, storage and memory are added to a pool as three independent resources.

HPE's Synergy devices offer great software-driven flexibility.

The approach is seen as the next logical evolution in the modernization of the data center. Where traditional IT worked well for the more static workloads of the past, software composable infrastructure provides unprecedented flexibility and agility tailored to today's rapidly changing workloads. If you opt for software composable, you bring the scalability of the cloud to your on-premises data center.

Efficient and smart assignment

The infrastructure enables new and efficient ways of working. Think of a database stored on a few SSDs and used only occasionally by an application. As soon as an app needs the data, the composer creates a system in which the required computing power and the necessary memory are linked to the SSDs holding the database. After the workload has finished, the dataset is disconnected and the processing power, possibly with more or less memory, can be used to analyze another dataset on an HDD in the pool. Such efficiency drastically reduces the cost of your data center.

Software composable infrastructure is the future of the flexible data center. Expect more and more solutions in which the modular aspect becomes ever more flexible. When it is time for a new investment in your data center, you can achieve more with fewer resources by embracing the software approach.

This article was translated; find the original content in its originally published language here.

About the Author:

DriveScale instantly turns any data center into an elastic bare-metal cloud with on-demand instances of compute, GPU and storage, including NVMe over Fabrics, to deliver the exact resources a workload needs, and to expand, reduce or replace resources on the fly. With DriveScale, high-performance, Kubernetes clusters deploy in seconds for machine learning, advanced analytics and cloud-native applications at a fraction of the cost of the public cloud.
