A new form of computer memory called processing-in-memory (PIM) can speed up the complicated calculations handled by processors such as CPUs and GPUs. As its name implies, each memory module can process data independently, so less data must travel between the memory and the CPU. Samsung first showed off PIM-modified GPUs in October, but only recently combined 96 of them into a single cluster. These customised MI100 chips delivered 2.5 times the performance of GPUs with standard video memory while consuming 2.67 times less power, significantly improving the GPUs’ ability to run AI algorithms.
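The data-movement argument above can be illustrated with a toy model. This sketch is purely illustrative (the workload, module count, and value sizes are assumptions, not Samsung's actual design): for a sum over a large array, a conventional CPU must pull every value across the memory bus, while a PIM system can let each memory module reduce its own shard locally and send back only one partial sum.

```python
# Toy model of why processing-in-memory (PIM) reduces bus traffic.
# The reduction workload and all numbers here are illustrative assumptions,
# not a description of Samsung's hardware.

def bytes_moved_conventional(n_values: int, value_size: int = 4) -> int:
    """CPU-side sum: every value must cross the memory bus."""
    return n_values * value_size

def bytes_moved_pim(n_values: int, n_modules: int, value_size: int = 4) -> int:
    """PIM-style sum: each module reduces its shard locally and
    sends back a single partial sum, so only one value per module
    crosses the bus."""
    return n_modules * value_size

if __name__ == "__main__":
    n = 1_000_000      # values to sum (illustrative)
    modules = 96       # e.g. one shard per PIM module (illustrative)
    conv = bytes_moved_conventional(n)
    pim = bytes_moved_pim(n, modules)
    print(f"conventional: {conv:,} bytes over the bus")
    print(f"PIM:          {pim:,} bytes over the bus")
```

The model ignores real-world factors such as command traffic and uneven shards, but it captures why workloads dominated by data movement, like many AI kernels, benefit most from PIM.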
Samsung has been developing PIM for some time. In 2021, the company demonstrated a number of systems using a variety of memory types, including DDR4, LPDDR5X, GDDR6, and HBM2. On a test program running a Meta AI workload, Samsung observed a 1.8x performance increase with the LPDDR5 version, along with a 42.6% reduction in power consumption and a 70% reduction in latency. Even more impressive, these results came from a typical server system with no motherboard or CPU modifications; the only change was swapping in PIM-enabled LPDDR5 DIMMs.
Samsung isn’t the only company developing PIM chips. Earlier this year, SK Hynix introduced its own PIM modules. In preliminary testing, SK Hynix found that its GDDR6-AiM (Accelerator in Memory) chip increased AI processing speed by 16x while consuming 80% less power. That’s far faster than Samsung’s modified MI100s, but because SK Hynix has not yet disclosed its test setup, a direct comparison isn’t possible.