Microchip Makes ML, AI Advances with Flashtec SSD Controller


Microchip Technology Inc. is addressing artificial intelligence (AI) challenges both through its own controller technology and through its subsidiary focused on low-power in-memory technology for the edge.

Microchip’s PCIe Gen 5, NVMe 2.0-capable SSD controller, the Flashtec NVMe 4016, makes advances on the speeds-and-feeds front with 16 high-speed programmable NAND flash channels capable of up to 2,400 MT/s, delivering 14 GB/s of throughput and more than 3 million IOPS. It also supports the latest storage and performance compute applications, including Zoned Namespaces (ZNS).

Microchip’s Flashtec NVMe 4016 includes a new, programmable machine learning engine capable of a variety of pattern recognition and classification functions (Source: Microchip)

Samer Haija, Microchip’s associate director of product management for data center solutions, said ZNS is still considered a niche, though the company does see increased deployments based on its controller.

“ZNS is a very promising technology that has had limited traction to date mainly due to the higher-level pieces needed to make it work at scale,” Haija said. But for ZNS to take off broadly, the SSD providers and the application providers need to develop a set of standards, tools, and drivers to take advantage of the technology in more data centers. “It was encouraging to see the Samsung and Western Digital announcement to drive standardization in this space,” Haija said.

While speed and performance are crucial to meeting AI demands, new pressures are being put on the flash itself, a challenge controller technology can help mitigate with NAND management at the back end. The programmable architecture of the Flashtec NVMe 4016 enables SSD developers to differentiate their products through firmware customization, and it includes a new, programmable machine learning (ML) engine capable of a variety of pattern recognition and classification functions employed in AI and ML applications.

The ML engine consists of an input layer, zero or more hidden layers, and an output layer. The input layer is responsible for receiving input from an external source. The hidden layers analyze the data and perform the learning process with the help of neurons, which hold weights and biases.

Based on those weights and biases, a neuron is activated when a threshold is reached, and the output layer provides the predicted output. Firmware in the NVMe SSD interfaces with the ML engine to send the model configuration, input, and training data, and receives the final output. Using output from the ML engine, the firmware performs the AI actions.
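The layer structure described above can be sketched as a minimal feed-forward pass. This is an illustrative model of the input/hidden/output flow, not Microchip's actual engine; the threshold activation and layer sizes are assumptions for the example.

```python
import numpy as np

def forward(x, layers):
    """Propagate an input vector through (weights, biases) layer pairs.

    Each neuron "activates" only when its weighted sum crosses a
    threshold (here 0), mirroring the activation described in the text.
    """
    for W, b in layers:
        z = W @ x + b
        x = np.where(z > 0, z, 0.0)  # neuron fires past the threshold
    return x

# Toy model: 3 inputs -> 4 hidden neurons -> 2 outputs
rng = np.random.default_rng(0)
layers = [
    (rng.normal(size=(4, 3)), rng.normal(size=4)),  # hidden layer
    (rng.normal(size=(2, 4)), rng.normal(size=2)),  # output layer
]
prediction = forward(np.array([1.0, 0.5, -0.2]), layers)
```

The firmware's role maps onto the call site: it supplies the model configuration (the `layers` list) and the input vector, and reads back `prediction` as the final output.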

“SSDs are typically designed for synthetic and generic workloads, and most SSD design teams implement SSD and media management algorithms that do not have full awareness of the traffic the SSD will undergo in its life cycle,” Haija said. “An AI engine in the controller enables real-time NAND management algorithm adaptation regardless of the type of workload the SSD is exposed to.”
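One way such workload-aware adaptation could work is pattern classification over recent I/O traffic. The sketch below is purely hypothetical — the function name, the sequentiality heuristic, and the threshold are invented for illustration, not Microchip's algorithm:

```python
def classify_workload(lbas, seq_threshold=0.7):
    """Label a window of logical block addresses 'sequential' or 'random'.

    A hypothetical stand-in for the controller's pattern-recognition
    step: the fraction of back-to-back consecutive accesses decides the
    label, from which firmware could pick a NAND-management policy
    (e.g. garbage-collection aggressiveness).
    """
    if len(lbas) < 2:
        return "random"
    consecutive = sum(1 for a, b in zip(lbas, lbas[1:]) if b == a + 1)
    ratio = consecutive / (len(lbas) - 1)
    return "sequential" if ratio >= seq_threshold else "random"

label = classify_workload([100, 101, 102, 103, 200])  # mostly a linear run
```

In a real controller this classification would run continuously, letting back-end management adapt as the host's access pattern shifts.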

Microchip’s dedicated engine frees up computing resources in the controller. At the same time, it’s still generic enough to develop application-agnostic AI/ML applications as well as balance performance, power, cost, and ease of use without compromising data integrity.

Microchip’s SSD controller business is part of a broader focus on data center solutions that’s not limited to AI, including PCIe switches and fabrics, PCIe/CXL retimers, and serial memory controllers.

SST’s SuperFlash memBrain used in WITINMEM’s ultra-low-power SoC (Source: SST)

In the meantime, the company’s subsidiary, Silicon Storage Technology (SST), is more focused on AI with computing-in-memory technologies designed to eliminate the data communication bottlenecks otherwise associated with performing AI speech processing at the network’s edge. SST’s SuperFlash memBrain neuromorphic memory solution has been successfully implemented into WITINMEM’s ultra-low-power SoC, which features computing-in-memory technology for neural network processing including speech recognition, voice-print recognition, deep speech noise reduction, scene detection, and health status monitoring.

SST’s SuperFlash memBrain is a multi-level non-volatile memory solution supporting a computing-in-memory architecture for ML deep-learning applications. SuperFlash memBrain relies on the company’s standard SuperFlash cell, which is already in production at many foundries, according to Mark Reiten, vice president of SST’s license division. The purpose-built analog co-processor design ware has been in development since 2015 and can perform ML processing more efficiently than digital systems, he said.

The WITINMEM neural processing SoC is the first in volume production that enables sub-mA systems to reduce speech noise and recognize hundreds of command words, both in real time and immediately after power-up, Reiten said. The memBrain neuromorphic memory product is optimized to perform vector matrix multiplication for neural networks and enables processors used in battery-powered and deeply embedded edge devices to deliver the highest possible AI inference performance per watt.
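Vector matrix multiplication, the operation memBrain is optimized for, is the core of every neural network layer. The digital sketch below shows what the analog array computes in place, with the weight matrix standing in for conductance values programmed into the memory cells (the numbers are invented for illustration):

```python
import numpy as np

# Stored weights play the role of values programmed into the memory
# array; the multiply-accumulate happens where the data lives, so no
# weights move across a bus during inference.
weights = np.array([[0.2, 0.8, 0.1],
                    [0.5, 0.3, 0.9]])  # 2 neurons x 3 inputs

inputs = np.array([1.0, 0.0, 0.5])     # e.g. an audio feature vector

# One vector matrix multiplication = one layer of inference.
outputs = weights @ inputs
```

In a conventional design, `weights` would be fetched from external DRAM for every multiply; keeping them resident in the compute array is what removes the bottleneck the article describes.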

The lower power consumption is achieved by storing the neural model weights as values in the memory array and using the memory array as the neural compute element, Reiten said. It’s also cheaper to build because external DRAM and NOR are not required.

“As soon as you move these to DRAM your power consumption jumps up dramatically and the cost of the overall system jumps dramatically,” Reiten said. “That’s what we’re trying to get around.”

Permanently storing neural models inside the memBrain solution’s processing element also supports instant – on functionality for real-time neural network processing.

Many of the recent efforts to develop in-memory computing solutions for AI applications and neural networks have centered on exploiting resistive RAM (ReRAM), and SST has done some of its own in-house development. But Reiten explained that ReRAM has limitations beyond a single bit per cell because programming multiple levels is time consuming and raises accuracy issues.

“Academics are playing with it, and they’re excited about it, but when you want to make something production worthy, it’s a whole different ballgame.”

Gary Hilson is a general contributing editor with a focus on memory and flash technologies for EE Times.

Related Articles:

Samsung, Western Digital Unite Around Zoned Storage

In-Memory Computing, AI Draws Research Interest

NAND Directs the Future of Memory Controllers

NVMe Controllers Look to Maximize NAND Potential

Micron Puts SSD into AI Mix
