February 11, 2025

Addressing Data Storage Changes in the AI Era

This article was originally published on EE Times Asia by Nelson Duann, Senior VP of Client and Automotive Storage Business at Silicon Motion.


Artificial Intelligence (AI) has quickly become a driving force in our daily lives, influencing everything from smartphones to healthcare. People now encounter AI in diverse contexts, experiencing benefits like real-time language translation, faster healthcare triage, and better digital photographs, all of which rely on the ability to move and process large amounts of data at high speed.

The rapid rise of AI, both at the edge and in the cloud, hinges on the seamless movement of data. With the emergence of generative AI tools that produce text, images, video, audio, and code, the volume of data AI must process, and the speed at which it must move, are staggering. AI is everywhere, from tiny wearables to powerful applications like genome sequencing. This is just the beginning of a broader transformation in which AI will become deeply ingrained in every aspect of our lives.

Evolving Demands on Computing Platforms

The transition from AI as a mere classification tool to far more computationally intensive generative use cases has been driven by advancements in hardware. The need for faster processors and heterogeneous computing has paved the way for AI-specific accelerators. Just as significant are advancements in memory technologies: the development of faster memory interfaces and greater memory capacity has been critical in enabling AI's growth.

AI's hunger for data has caused a surge in demand for higher interface bandwidth and more efficient memory architectures. AI is both a consumer and creator of massive amounts of data, especially in the context of generative AI, where the outputs—such as synthesised images, audio, and video—are generated in vast quantities.

AI's Impact on Storage Requirements

The growing prevalence of AI applications, especially generative AI like ChatGPT, has escalated the pressure on storage systems. Most current AI applications are cloud-based, hosted in massive data centres operated by tech giants like Google, Microsoft, and Meta. These data centres have invested heavily in hardware capable of supporting the demanding nature of AI workloads.

However, as AI begins to migrate to edge platforms such as PCs, smartphones, and even cars, the storage and processing requirements will change. Privacy concerns, among other factors, drive this shift to edge AI. Keeping AI processing local to the device helps preserve user privacy, but it also creates an enormous increase in demand for memory and processing power on everyday consumer devices.

The Future of NAND Flash Storage for AI

As AI moves to the edge, it requires storage solutions that can meet increasing demands for density, speed, and efficiency. NAND Flash technology plays a key role in meeting these demands, with innovations like TLC (Triple-Level Cell) and QLC (Quad-Level Cell) structures pushing capacity limits even further. We now have 3D NAND stacking technology with over 200 layers, and terabyte-level SSDs are becoming commonplace.
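
To see why bits per cell and layer count both matter, here is a back-of-the-envelope sketch in Python. The cells-per-layer figure is a hypothetical round number chosen for illustration, not the specification of any actual NAND die or Silicon Motion product.

```python
# Illustrative only: rough per-die capacity scaling for 3D NAND.
# The cells-per-layer and layer figures are hypothetical round numbers,
# not specifications of any NAND vendor's die.

def die_capacity_gbit(cells_per_layer: float, layers: int, bits_per_cell: int) -> float:
    """Raw capacity of one die in gigabits (before spare area and ECC)."""
    return cells_per_layer * layers * bits_per_cell / 1e9

cells_per_layer = 2.0e9  # assumed cells in one layer of the array
for name, bits in [("TLC (3 bits/cell)", 3), ("QLC (4 bits/cell)", 4)]:
    cap = die_capacity_gbit(cells_per_layer, layers=200, bits_per_cell=bits)
    print(f"{name} at 200 layers: ~{cap:,.0f} Gbit per die")
```

The same array delivers a third more capacity per die simply by storing four bits per cell instead of three, which is why QLC matters for terabyte-class devices.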

Silicon Motion has been at the forefront of NAND Flash development. The company was the first to introduce a controller supporting QLC NAND SSDs, setting a high standard for performance and durability in AI applications.

Large language models (LLMs), such as GPT, exemplify AI's strain on storage systems. With billions of parameters, these transformer-based models demand immense storage and continuous read/write operations during the training and inference stages. As LLMs and other generative models evolve, NAND storage will need to meet higher speed, capacity, and power efficiency demands.
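
To put "billions of parameters" into storage terms, a simple sketch follows. The parameter counts and precisions are assumed round figures for illustration, and real checkpoints also carry optimiser state, activations, and KV caches on top of the raw weights.

```python
# Back-of-the-envelope storage footprint of LLM weights.
# Parameter counts and byte widths are assumed illustrative values,
# not figures for any specific published model.

def weights_size_gb(params_billions: float, bytes_per_param: float) -> float:
    """Size of the raw weight tensors in gigabytes (1 GB = 1e9 bytes)."""
    return params_billions * 1e9 * bytes_per_param / 1e9

for params, precision, nbytes in [(7, "FP16", 2), (70, "FP16", 2), (70, "INT4", 0.5)]:
    size = weights_size_gb(params, nbytes)
    print(f"{params}B parameters @ {precision}: ~{size:.0f} GB of weights")
```

Even an aggressively quantised large model occupies tens of gigabytes, and every training or inference pass has to move that data through the memory and storage hierarchy.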

Silicon Motion's Strategy for the AI Era

In response to AI's growing role in consumer devices, Silicon Motion has focused on developing faster, higher-capacity memory solutions that consume less power. We are committed to advancing SSD density while finding innovative ways to increase capacity without proportionally increasing power consumption.

One area of focus for Silicon Motion is leveraging cutting-edge process geometries from its foundry partners and new techniques like power islands for granular chip control. Additionally, the company is exploring advanced data organisation methods, such as Flexible Data Placement (FDP) for solid-state drives (SSDs) and Zoned Namespaces (ZNS) for mobile applications, ensuring that storage solutions are tailored to the varying requirements of different devices.
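
For readers unfamiliar with zoned storage, the toy model below captures the core idea: each zone accepts only sequential writes at a write pointer and must be reset before it can be rewritten, which lets the host and controller co-locate data with similar lifetimes and avoid garbage-collection write amplification. This is a conceptual sketch only, not the NVMe ZNS or UFS command set, and not Silicon Motion firmware.

```python
# Toy model of zoned-storage semantics (conceptual only).

class Zone:
    def __init__(self, capacity_blocks: int):
        self.capacity = capacity_blocks
        self.write_pointer = 0                 # next block that may be written
        self.blocks = [None] * capacity_blocks

    def append(self, data) -> int:
        """Writes must be sequential: data always lands at the write pointer."""
        if self.write_pointer >= self.capacity:
            raise IOError("zone full; reset required before rewriting")
        lba = self.write_pointer
        self.blocks[lba] = data
        self.write_pointer += 1
        return lba

    def reset(self):
        """Erase the whole zone and rewind the write pointer."""
        self.blocks = [None] * self.capacity
        self.write_pointer = 0

# Grouping data with similar lifetime (e.g. model shards) into one zone
# is what lets the device sidestep costly garbage collection.
zone = Zone(capacity_blocks=4)
for chunk in ("model-shard-0", "model-shard-1"):
    print("wrote", chunk, "at block", zone.append(chunk))
```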

An example of Silicon Motion's innovations is the recently launched SM2756 UFS 4.1 controller. The SM2756, manufactured with a low-power 6nm process, supports sequential read/write speeds of 4300/4000+ MB/s and can handle up to 2TB of 3D TLC and QLC NAND Flash, making it ideal for AI mobile applications.
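
As a rough sense of what that bandwidth means for on-device AI, the sketch below estimates how long it would take to stream model weights from flash at the quoted sequential read speed. The model sizes are assumed examples, not benchmarks of the SM2756.

```python
# Rough model-load time at the quoted sequential read speed of the SM2756.
# Model sizes are assumed examples; real load times also depend on the host,
# file system, and how the weights are laid out in flash.

SEQ_READ_MB_S = 4300  # sequential read figure cited for the SM2756 (MB/s)

for model_gb in (4, 14, 35):
    seconds = model_gb * 1000 / SEQ_READ_MB_S
    print(f"{model_gb} GB of on-device weights: ~{seconds:.1f} s to stream from flash")
```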

Overcoming Challenges Ahead

As AI continues to evolve, the challenges surrounding data storage will intensify. Large AI models, like LLMs, require enormous amounts of data transfer between devices and the cloud, putting a premium on operational and energy efficiency. Silicon Motion's robust expertise in NAND Flash management is essential for meeting these demands.

Looking to the future, Silicon Motion is doubling down on adaptable firmware development, enhanced power efficiency, and support for cutting-edge interfaces like UFS 4.1 for AI smartphones and PCIe Gen5 low-power SSD controllers for AI PCs. The company's MonTitan SSD controller platform is also positioned to support the intense demands of data centre storage for AI.

The AI wind is strong, and it's blowing toward the edge. Every mobile user will soon be running personal AI models on their phones, requiring massive storage that can handle faster data exchanges while maintaining energy efficiency. At Silicon Motion, we are constantly pushing forward with new memory structures and techniques to meet these demands.

As AI continues to expand its reach, Silicon Motion remains committed to advancing memory solutions that empower next-generation AI applications—whether they reside in data centres, edge devices, or the smartphones in our pockets.
