Global Neighbor Sampling for Mixed CPU-GPU Training on Giant Graphs

Jialin Dong, Da Zheng, Lin F. Yang, George Karypis

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

10 Scopus citations

Abstract

Graph neural networks (GNNs) are powerful tools for learning from graph data and are widely used in applications such as social network recommendation, fraud detection, and graph search. The graphs in these applications are typically large, often containing hundreds of millions of nodes, and training GNN models on them efficiently remains a major challenge. Although a number of sampling-based methods have been proposed to enable mini-batch training on large graphs, these methods have not been proven to work on truly industry-scale graphs, which require GPUs or mixed CPU-GPU training. The state-of-the-art sampling-based methods are usually not optimized for these real-world hardware setups, in which data movement between CPUs and GPUs is a bottleneck. To address this issue, we propose Global Neighbor Sampling (GNS), which targets GNN training on giant graphs in mixed CPU-GPU setups. The algorithm periodically samples a global cache of nodes, shared across all mini-batches, and stores their features in GPU memory. This global cache enables in-GPU importance sampling of mini-batches, which drastically reduces the number of nodes in a mini-batch, especially in the input layer; this in turn reduces both the data copied between CPU and GPU and the mini-batch computation, without compromising the training convergence rate or model accuracy. We provide a highly efficient implementation of this method and show that it outperforms an efficient node-wise neighbor sampling baseline by a factor of 2×-4× on giant graphs, and outperforms an efficient implementation of LADIES with small layers by a factor of 2×-14× while achieving much higher accuracy than LADIES. We also theoretically analyze the proposed algorithm and show that, with a cache of proper size, it enjoys a convergence rate comparable to that of the underlying node-wise sampling method.
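To make the cache mechanism concrete, the following minimal Python sketch mimics the two steps the abstract describes: periodically resampling a global node cache and biasing neighbor sampling toward cached nodes. The function names (build_global_cache, sample_fanout) and the degree-proportional cache distribution are assumptions made for illustration, not the authors' implementation; the paper's actual importance distribution, estimator, and GPU data layout are more involved.

    import random
    import torch

    def build_global_cache(degrees: torch.Tensor, cache_size: int) -> set:
        # Periodically resample a global cache of nodes, biased toward
        # high-degree nodes (illustrative distribution; the paper uses an
        # importance-based one). Cached features would stay on the GPU.
        probs = degrees.float() / degrees.float().sum()
        cached = torch.multinomial(probs, cache_size, replacement=False)
        return set(cached.tolist())

    def sample_fanout(neighbors: list, cache: set, fanout: int) -> list:
        # Biased neighbor sampling for one node: prefer cached neighbors
        # so that most mini-batch inputs are already on the GPU, shrinking
        # the CPU-to-GPU feature copy.
        in_cache = [v for v in neighbors if v in cache]
        rest = [v for v in neighbors if v not in cache]
        picked = random.sample(in_cache, min(fanout, len(in_cache)))
        if len(picked) < fanout:
            picked += random.sample(rest, min(fanout - len(picked), len(rest)))
        return picked

    # Toy usage on a 6-node graph stored as an adjacency list.
    adj = {0: [1, 2, 3], 1: [0, 2], 2: [0, 1, 4, 5], 3: [0], 4: [2, 5], 5: [2, 4]}
    degrees = torch.tensor([len(adj[v]) for v in range(6)])
    cache = build_global_cache(degrees, cache_size=3)
    print("cached nodes:", cache)
    print("sampled neighbors of node 2:", sample_fanout(adj[2], cache, fanout=2))

In the real system the cached nodes' features reside in GPU memory, so every sampled neighbor that hits the cache avoids a CPU-to-GPU feature copy; the sketch reproduces only the sampling logic on a toy graph.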

Original language: English (US)
Title of host publication: KDD 2021 - Proceedings of the 27th ACM SIGKDD Conference on Knowledge Discovery and Data Mining
Publisher: Association for Computing Machinery
Pages: 289-299
Number of pages: 11
ISBN (Electronic): 9781450383325
DOIs
State: Published - Aug 14 2021
Externally published: Yes
Event: 27th ACM SIGKDD Conference on Knowledge Discovery and Data Mining, KDD 2021 - Virtual, Online, Singapore
Duration: Aug 14 2021 - Aug 18 2021

Publication series

Name: Proceedings of the ACM SIGKDD International Conference on Knowledge Discovery and Data Mining

Conference

Conference: 27th ACM SIGKDD Conference on Knowledge Discovery and Data Mining, KDD 2021
Country/Territory: Singapore
City: Virtual, Online
Period: 8/14/21 - 8/18/21

Bibliographical note

Publisher Copyright:
© 2021 Owner/Author.

Keywords

  • graph neural networks
  • mixed CPU-GPU training
  • neighbor sampling
