5 Top Data Center Storage Trends 

Data center storage is a fascinating area. 

On the one hand, there are compliance mandates that require certain material to be stored and never deleted, along with retention policies that dictate how long data must be kept before it can be discarded. 

On the other hand, the value of the data itself should be weighed: most data, it turns out, is looked at once and never accessed again. Balancing these pressures drives many of the decisions in data center storage.

Here are some of the top trends in the data center storage market: 

1. AI, cold, and hot storage 

Think of the crowds taking high-definition videos and photos on their smartphones at concerts, sporting events, and parades: the bulk of that material will be uploaded to a cloud data center and never accessed again. 

The good news is that cloud storage providers are getting smarter at identifying this kind of data-heavy, rarely accessed content using artificial intelligence (AI) for IT operations (AIOps) and routing it quickly into cold storage.

Cold storage is used for archival data that is rarely accessed, said Steve Carlini, VP of innovation and data center, Schneider Electric.

It’s cost-effective, but data stored there takes much longer to retrieve if it is ever needed. 
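
As a rough sketch of the routing decision Carlini describes, a tiering policy can be as simple as checking how long an object has gone without being accessed. The thresholds and tier names below are illustrative assumptions, not any particular provider's policy.

```python
from datetime import datetime, timedelta

# Illustrative thresholds -- real providers tune these, increasingly with AIOps telemetry.
HOT_WINDOW = timedelta(days=30)     # accessed within the last month: keep on fast storage
COLD_WINDOW = timedelta(days=180)   # untouched for six months: archive it

def choose_tier(last_accessed: datetime) -> str:
    """Pick a storage tier for an object based on how long it has sat unread."""
    idle = datetime.utcnow() - last_accessed
    if idle <= HOT_WINDOW:
        return "hot"    # fast, easy-to-access storage (typically SSD)
    if idle <= COLD_WINDOW:
        return "warm"   # cheaper, infrequent-access storage
    return "cold"       # archival storage: cheapest, slowest to retrieve

# Concert footage uploaded a year ago and never viewed since lands in cold storage.
print(choose_tier(datetime.utcnow() - timedelta(days=365)))  # -> cold
```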

Hot storage, on the other hand, refers to fast, easy-to-access data storage. Hot storage technology is moving away from rotating hard disk drives (HDDs) toward solid-state drives (SSDs), which have no moving parts, are smaller and faster, and use a fraction of the energy of an HDD. 

“SSDs traditionally cost a lot more, but capital costs are coming down, and when you factor in the operating costs from powering and cooling, the TCO for SSDs is becoming more favorable,” Carlini said. 
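
Carlini's point about total cost of ownership (TCO) can be made concrete with a back-of-the-envelope comparison. Every figure below, from drive prices to power draw and cooling overhead, is a placeholder assumption for illustration, not a vendor number.

```python
def five_year_tco(capital_cost, avg_watts, kwh_price=0.12, cooling_overhead=0.5, years=5):
    """Rough TCO: purchase price plus the electricity to power and cool the drive."""
    energy_kwh = avg_watts * years * 365 * 24 / 1000
    return capital_cost + energy_kwh * kwh_price * (1 + cooling_overhead)

# Placeholder figures for drives of similar capacity; power and cooling narrow the
# purchase-price gap, and falling SSD capital costs keep pushing the two closer.
print(f"HDD 5-year TCO: ${five_year_tco(capital_cost=350, avg_watts=9):.0f}")
print(f"SSD 5-year TCO: ${five_year_tco(capital_cost=900, avg_watts=4):.0f}")
```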

2. The need for speed 

Data-intensive workloads supporting database and analytics applications increasingly require more compute and NVMe SSD storage resources, said Seth Bobroff, director of product marketing, Pliops.

“A new class of data processors has emerged into data center architectures to address performance and storage management efficiency challenges that were once addressed by adding more CPUs,” Bobroff said. 

“These processors are overcoming the limitations of using traditional RAID technology with SSD deployments, ushering in a renaissance of RAID.” 
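
For context on the CPU work Bobroff says is being offloaded, traditional single-parity RAID boils down to XOR arithmetic across the blocks of a stripe, and rebuilding a failed drive repeats that arithmetic for every stripe it held. The sketch below is a generic illustration of that idea, not Pliops' implementation.

```python
from functools import reduce

def parity(blocks):
    """XOR the data blocks of a stripe to produce its parity block (RAID 5 style)."""
    return bytes(reduce(lambda a, b: a ^ b, column) for column in zip(*blocks))

def rebuild(surviving_blocks, parity_block):
    """Recover a lost block by XOR-ing the surviving blocks with the parity block."""
    return parity(surviving_blocks + [parity_block])

stripe = [b"\x01\x02", b"\x0f\x00", b"\xaa\x55"]   # one block per data drive
p = parity(stripe)
# Simulate losing the middle drive and recovering its block from the rest.
assert rebuild([stripe[0], stripe[2]], p) == stripe[1]
```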

3. Storage efficiency 

As IT is asked to do more with less, storage systems are no longer judged solely on their ability to deliver high performance or high capacity. 

Instead, they are being evaluated on how efficiently they can deliver those requirements. The standard enterprise-class flash drive can deliver up to 200K IOPS per drive, yet current storage systems require dozens of these drives to provide a few hundred thousand IOPS, because they extract only a fraction of each drive’s potential. At the same time, most vendors shy away from using cost-effective, high-density (20 TB) hard disk drives because of the long recovery times when a drive fails.

“The modern storage solution needs to extract the full performance potential of flash drives by processing I/O more efficiently, reducing the flash drive requirement to as few as six drives,” said George Crump, CMO, StorOne.

“It should also intelligently marry flash with a hard disk to lower costs while offering consistent performance between the two tiers.”
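
A quick back-of-the-envelope calculation shows how those drive counts follow from how much of each drive's rated performance the storage software actually extracts; the system target and utilization figures here are illustrative assumptions.

```python
import math

def drives_needed(target_iops, per_drive_iops, utilization):
    """Flash drives required when software extracts only a fraction of each drive's rated IOPS."""
    return math.ceil(target_iops / (per_drive_iops * utilization))

# A 200K-IOPS enterprise flash drive and a 1.2M-IOPS system target.
print(drives_needed(1_200_000, 200_000, utilization=1.0))   # 6 drives at full potential
print(drives_needed(1_200_000, 200_000, utilization=0.2))   # 30 drives at 20% utilization
```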

Hao Zhong, co-founder and CEO of ScaleFlux, agreed on the need for efficiency. 

“The idea that storage is cheap so let’s keep everything hasn’t aged well,” Zhong said. 

“As data growth accelerates faster than technology can match, we see compound costs of storage adding up and taking their toll — the cost of compliance, security, and multiple copies distributed to maintain performance.”   

Zhong sees efficiency efforts advancing on several fronts: processing metadata to reduce the payload, pre-filtering to minimize network congestion, embedding transparent compression into the drives, and increasing capacity density without burdening the CPU.  

“No single product can overcome the challenges associated with data growth, but the one we are most interested in is building better, smarter SSDs,” Zhong said.
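
To see why in-drive transparent compression helps capacity density and cost, consider dollars per terabyte of logical data stored; the drive price and compression ratios below are illustrative only, since real ratios depend entirely on the data.

```python
def cost_per_effective_tb(drive_cost, raw_tb, compression_ratio):
    """$/TB of logical data when the drive compresses transparently,
    without spending host CPU cycles on compression."""
    return drive_cost / (raw_tb * compression_ratio)

# Illustrative figures for an 8 TB drive.
print(cost_per_effective_tb(drive_cost=800, raw_tb=8, compression_ratio=1.0))  # 100.0 $/TB uncompressed
print(cost_per_effective_tb(drive_cost=800, raw_tb=8, compression_ratio=2.5))  # 40.0 $/TB at 2.5:1
```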

4. Less moving and copying of data 

Another efficiency tactic is to minimize how much data gets moved and copied. 

Organizations are spending too much money, time, and resources moving file and object data around. 

This ties back to the long-standing practice of moving and copying stored data to whichever application needs it, which has created data capacity, management, and security challenges.

“Workflows can be designed to execute in real-time on remote, distributed datasets,” said Steve Wallo, CTO, Vcinity.

“Dynamic and agile application movement can now be done while the data stays in place, giving the apps full access and performance without network latency penalties.”  

5. Data dispersal 

Data locations are no longer just servers. 

Ten years ago, companies typically kept their data in five places at most. Now, companies are using more data sources than ever before: not just on-premises, but across multiple public cloud locations, including dozens of SaaS applications. 

Those data sources and silos now add up to more than 150 places, on average, where a mid-sized enterprise keeps data, and even more for larger enterprises, said Simon Taylor, founder and CEO, HYCU.

“The wide array of data locations presents a challenge for data protection and backup and recovery,” Taylor said. 

“As companies employ new data locations, they need to eliminate the sprawl, reduce the number of silos, and protect data in a unified and holistic way, while handling emerging and new threats, like the insidious rise of ransomware and increasing threat of cyberattacks.” 
