AI in EE

AI IN DIVISIONS

AI in Computer Division

Youngeun Kwon and Minsoo Rhu, "Training Personalized Recommendation Systems from (GPU) Scratch: Look Forward not Backwards," The 49th IEEE/ACM International Symposium on Computer Architecture (ISCA-49), New York, NY, June 2022

Abstract:

Personalized recommendation models (RecSys) are one of the most popular machine learning workloads serviced by hyperscalers. A critical challenge of training RecSys is its high memory capacity requirement, reaching hundreds of GBs to TBs of model size. In RecSys, the so-called embedding layers account for the majority of memory usage, so current systems employ a hybrid CPU-GPU design in which the large CPU memory stores the memory-hungry embedding layers. Unfortunately, training embeddings involves several memory-bandwidth-intensive operations that are at odds with the slow CPU memory, causing performance overheads. Prior work proposed to cache frequently accessed embeddings inside GPU memory as a means to filter down the embedding-layer traffic to CPU memory, but this paper observes several limitations with such a cache design. In this work, we present a fundamentally different approach to designing embedding caches for RecSys. Our proposed ScratchPipe architecture utilizes unique properties of RecSys training to develop an embedding cache that sees not only past but also "future" cache accesses. ScratchPipe exploits this property to guarantee that the active working set of the embedding layers is "always" captured inside our proposed cache design, enabling embedding-layer training to be conducted at GPU memory speed.
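The abstract states the look-forward idea only at a high level. As a rough illustration of what such a cache might look like, the Python sketch below assumes that upcoming mini-batches, and hence their embedding row indices, are known in advance and can be prefetched into a small cache before each training step; the class and method names (LookAheadEmbeddingCache, prefetch, lookup) are hypothetical and are not taken from the ScratchPipe paper.

```python
# Hypothetical sketch of a "look-forward" embedding cache. Because RecSys training
# batches are prepared ahead of time, the row indices of upcoming embedding lookups
# are known in advance, so the cache can be filled *before* each step instead of
# reacting to misses. Names and structure are illustrative, not from the paper.

from collections import deque

class LookAheadEmbeddingCache:
    def __init__(self, cpu_table, capacity, lookahead):
        self.cpu_table = cpu_table   # large "CPU-memory" embedding table: {row_id: vector}
        self.capacity = capacity     # number of rows the small "GPU" cache can hold
        self.lookahead = lookahead   # how many future batches to peek at
        self.gpu_cache = {}          # "GPU-resident" cache: {row_id: vector}

    def prefetch(self, upcoming_batches):
        """Load every row needed by the next `lookahead` batches before they run."""
        needed = []
        for batch in list(upcoming_batches)[: self.lookahead]:
            for row_id in batch:
                if row_id not in needed:
                    needed.append(row_id)
        needed = needed[: self.capacity]  # the working set must fit the cache
        # Evict rows that are no longer part of the upcoming working set.
        for row_id in list(self.gpu_cache):
            if row_id not in needed:
                del self.gpu_cache[row_id]
        # Copy missing rows from "CPU memory" into the cache ahead of time.
        for row_id in needed:
            if row_id not in self.gpu_cache:
                self.gpu_cache[row_id] = self.cpu_table[row_id]

    def lookup(self, row_id):
        # prefetch() already ran for this batch, so this lookup always hits.
        return self.gpu_cache[row_id]


if __name__ == "__main__":
    table = {i: [float(i)] * 4 for i in range(1000)}         # toy 1000-row embedding table
    batches = deque([[1, 7, 42], [7, 99, 3], [42, 3, 500]])  # batches known in advance
    cache = LookAheadEmbeddingCache(table, capacity=8, lookahead=2)
    while batches:
        cache.prefetch(batches)                # fill the cache using future access knowledge
        batch = batches.popleft()
        vectors = [cache.lookup(i) for i in batch]
        print(batch, "->", len(vectors), "cache hits")
```

In this toy version the prefetch happens synchronously; the point is only that, with future batches visible, the current batch's embedding rows are guaranteed to be cache-resident when the training step needs them.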

 
