Hardware has always defined the data center. Look into a big room and you would see aisle after aisle of servers, storage, and networking equipment, with massive cooling systems and power management hardware, such as switches and batteries, along the walls.

For resilience and disaster recovery (DR), the solution was simple: build a mirror image of that data center at another location by buying a new set of that equipment and installing it in the new facility. Of course, there was plenty of software too. But the hardware defined the data center.
But that may be changing as the software-defined movement gathers momentum. The basic idea is to decouple the software from the underlying hardware. Instead of one vendor building a storage area network (SAN) array with proprietary software that only runs on that system, or another building a switch with secret-sauce software inside, the software should be able to run on any hardware. There are now so many software-defined elements that people are talking about entire software-defined data centers (SDDCs).
Here are the top trends in the software-defined data center market:
Software-defined flash

We’ve had software-defined storage (SDS), software-defined compute, and software-defined networking (SDN). And now we have software-defined flash.
To reach efficiency at scale, hyperscale cloud and data center storage needs more from flash devices than the hard disk drive (HDD) protocols they currently speak can deliver. The Linux Foundation’s Software-Enabled Flash Community Project has therefore developed a software-defined flash API. Developers can use it to customize flash storage to data center, application, and workload requirements.

Kioxia, for example, introduced software-defined flash technology and sample hardware based on PCIe and NVMe. It uncouples flash storage from legacy HDD protocols, allowing flash to realize its full capability and potential as a storage medium.
“Software-enabled flash technology fundamentally redefines the relationship between the host and solid-state storage,” said Eric Ries, SVP, memory storage strategy division, Kioxia America.
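What redefines that relationship is placement: the host, not the drive firmware, decides where data lands on the flash. The toy Python sketch below illustrates only that placement idea; every class and method name here is hypothetical, not the actual Software-Enabled Flash API, which is a C library with its own abstractions.

```python
# Toy model of host-controlled flash placement, the core idea behind
# software-enabled flash. All names are illustrative stand-ins.

class FlashDie:
    """One flash die: a unit of parallelism the host can reserve."""
    def __init__(self, die_id: int):
        self.die_id = die_id
        self.blocks: list[bytes] = []

    def program(self, data: bytes) -> int:
        self.blocks.append(data)     # sequential program, no FTL remapping
        return len(self.blocks) - 1  # physical block index

class VirtualDevice:
    """A group of dies dedicated to one workload."""
    def __init__(self, dies: list[FlashDie]):
        self.dies = dies
        self._next = 0

    def write(self, data: bytes) -> tuple[int, int]:
        die = self.dies[self._next % len(self.dies)]  # stripe across dies
        self._next += 1
        return (die.die_id, die.program(data))

# Carve 8 dies into two isolated domains: latency-sensitive database
# traffic never queues behind bulk log writes, because each workload
# owns its own dies.
dies = [FlashDie(i) for i in range(8)]
database_domain = VirtualDevice(dies[:4])
logging_domain = VirtualDevice(dies[4:])

print(database_domain.write(b"hot row"))   # (0, 0)
print(logging_domain.write(b"cold log"))   # (4, 0)
```

The payoff mirrored here is isolation: workloads that own separate dies cannot create queueing or garbage-collection interference for one another, which is exactly what HDD-era block protocols cannot express.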
Orchestration

Magic begins to happen once you decouple physical servers from the software they host, storage arrays from the many types of software they can deploy, and networking software from the underlying switches, routers, and other networking gear.
But so, too, does complexity. What is needed is a way to orchestrate the many elements, so the data center “symphony” plays in the same key, keeps time, and follows the conductor.
“With the increased complexity and scale of data centers, the industry must move beyond automating the configuration of infrastructure and workloads to a new paradigm built around orchestration,” said Rick Taylor, CTO, Ori.

“We must think about the desired state of services and leverage smart software to plan and deploy instances and their connectivity.”
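Taylor’s “desired state” paradigm is the heart of modern orchestrators: rather than scripting configuration steps, you declare what should exist, and a control loop continually converges reality toward it. Here is a minimal, hypothetical sketch of such a reconciliation loop in Python; the function names and the in-memory “infrastructure” are stand-ins, not any real orchestrator’s API.

```python
import time

# Desired state, declared up front: service name -> replica count.
desired = {"web": 3, "worker": 2}

# Actual state starts empty; the loop will converge it.
actual: dict[str, int] = {}

def launch(service: str) -> None:
    """Stand-in for starting one instance (e.g., a container)."""
    actual[service] = actual.get(service, 0) + 1
    print(f"launched {service} ({actual[service]}/{desired[service]})")

def terminate(service: str) -> None:
    """Stand-in for stopping one instance."""
    actual[service] -= 1
    print(f"terminated {service} ({actual[service]}/{desired[service]})")

def reconcile() -> None:
    """One pass of the control loop: diff desired vs. actual, then act."""
    for service, want in desired.items():
        have = actual.get(service, 0)
        for _ in range(want - have):
            launch(service)
        for _ in range(have - want):
            terminate(service)

# Real orchestrators run this loop forever; three passes make a demo.
for _ in range(3):
    reconcile()
    time.sleep(0.1)
```

Note that the loop is idempotent: after the first pass, later passes find nothing to do, and if an instance dies, the very same logic replaces it with no special recovery code.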
Already there?

Many think the software-defined data center is gradually emerging.

But Ugur Tigli, CTO at MinIO, believes we are already there, thanks to containerization and, above all, Kubernetes.
“The modern data center is already software-defined and the colossal success of Kubernetes only ensures that it will remain that way,” Tigli said.

“With software-defined infrastructure, you gain the ability to dynamically provision, operate, and maintain applications and services. Once infrastructure is virtualized and software-defined, automation becomes a force multiplier and the only way to achieve elasticity and scalability.”
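Tigli’s point about dynamic provisioning is concrete in Kubernetes, where scaling a service is a one-line change to declared state. The sketch below uses the official kubernetes Python client; the deployment name “web” and the namespace are assumptions for illustration, and credentials come from a local kubeconfig.

```python
from kubernetes import client, config

# Authenticate against whatever cluster the local kubeconfig points at.
config.load_kube_config()

apps = client.AppsV1Api()

# Declare new desired state: 5 replicas of the (assumed) "web" deployment.
# Kubernetes' own control loops then converge the cluster toward it;
# this code issues no imperative start/stop commands.
apps.patch_namespaced_deployment_scale(
    name="web",            # hypothetical deployment name
    namespace="default",
    body={"spec": {"replicas": 5}},
)

print("desired replicas set to 5; the scheduler does the rest")
```

Nothing in that code cares which servers, racks, or network gear sit underneath, which is the elasticity Tigli describes.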
Appliance reliance

Appliances have sprung up over the past two decades to take care of a multitude of data center functions.
They are used for deduplication, compression, backup, and much else. There are even massive appliances from the likes of Oracle that package all the compute, networking, and storage hardware in one box along with Oracle software and databases, all tuned and optimized for that application or database.

But there is a problem: these appliances tend to go against the software-defined paradigm, as they generally have proprietary software inside. Yet data centers, and IT in general, are full of them, because they have worked so well.
“There is a major challenge that existing infrastructure vendors face – you can’t containerize an appliance,” said Tigli.

“Every appliance maker is frantically trying to separate their software from their hardware, because the cloud-native data center is an extinction event for them.”
You will still need CPUs, networks, and drives, Tigli said, but everything else is software, and that software needs to run on anything.
Look at the cloud today: processor options include Intel, AMD, Nvidia, TPU, and Graviton, to name a few. Even private clouds present considerable diversity, with commodity hardware from Supermicro, Dell Technologies, HPE, Seagate, and Western Digital offering different price and performance configurations.
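That hardware diversity is invisible to applications when the storage layer is software-defined: object storage such as MinIO exposes the same S3 API whether it runs on a Supermicro box, a Dell cluster, or a cloud VM. A minimal sketch using MinIO’s official Python SDK follows; the endpoint, credentials, and bucket name are placeholders.

```python
import io
from minio import Minio

# The endpoint could be any hardware underneath: the application
# code is identical either way.
client = Minio(
    "storage.example.com",    # placeholder endpoint
    access_key="ACCESS_KEY",  # placeholder credentials
    secret_key="SECRET_KEY",
)

if not client.bucket_exists("demo"):
    client.make_bucket("demo")

payload = b"hardware-agnostic bytes"
client.put_object(
    "demo", "hello.txt",
    data=io.BytesIO(payload),
    length=len(payload),
)
```

Swapping the servers or drives beneath this code changes nothing above the API line, which is the decoupling the software-defined paradigm promises.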
“The result is that we live in a data center world that is software-defined and increasingly open,” Tigli said.

“Only through open-source software can the developer achieve the freedom required to understand the software in the context of heterogeneous hardware.”