Legion-ATC23-Artifacts

Legion is a system for large-scale GNN training. Legion uses GPUs to accelerate graph sampling, feature extraction, and GNN training, and it utilizes multi-GPU memory as a unified cache to minimize PCIe traffic. In this repo, we provide Legion's prototype and show how to run it. There are two ways to build Legion: 1. building from source; 2. using the pre-installed Legion. For artifact evaluation, we recommend the pre-installed Legion. Due to machine limitations, we only demonstrate Legion's functionality in the pre-installed environment.

Hardware in Our Paper

All platforms are bare-metal machines.

Table 1

| Platform | CPU Info | #Sockets | #NUMA nodes | CPU Memory | PCIe | GPUs | NVLinks |
|---|---|---|---|---|---|---|---|
| DGX-V100 | 96*Intel(R) Xeon(R) Platinum 8163 CPU @2.5GHZ | 2 | 1 | 384GB | PCIe 3.0x16, 4*PCIe switches, each connecting 2 GPUs | 8x16GB-V100 | NVLink Bridges, Kc = 2, Kg = 4 |
| Siton | 104*Intel(R) Xeon(R) Gold 5320 CPU @2.2GHZ | 2 | 2 | 1TB | PCIe 4.0x16, 2*PCIe switches, each connecting 4 GPUs | 8x40GB-A100 | NVLink Bridges, Kc = 4, Kg = 2 |
| DGX-A100 | 128*Intel(R) Xeon(R) Platinum 8369B CPU @2.9GHZ | 2 | 1 | 1TB | PCIe 4.0x16, 4*PCIe switches, each connecting 2 GPUs | 8x80GB-A100 | NVSwitch, Kc = 1, Kg = 8 |

Kc denotes the number of GPU groups in which the GPUs are connected to each other, and Kg denotes the number of GPUs in each group. For example, on DGX-V100 the eight GPUs form Kc = 2 groups of Kg = 4 NVLink-connected GPUs each.

Hardware We Can Support Now

Unfortunately, the platforms above are currently unavailable. As an alternative, we provide a stable machine with two GPUs.

Table 2

| Platform | CPU Info | #Sockets | #NUMA nodes | CPU Memory | PCIe | GPUs | NVLinks |
|---|---|---|---|---|---|---|---|
| Siton2 | 104*Intel(R) Xeon(R) Gold 5320 CPU @2.2GHZ | 2 | 2 | 500GB | PCIe 4.0x16, 2*PCIe switches, one connecting 2 GPUs | 2x80GB-A100 | NVLink Bridges, Kc = 1, Kg = 2 |

We will provide access instructions for Siton2 in the ATC artifacts submission.

Software

Legion's software stack is lightweight and portable. Here we list the tested environments.

  1. Nvidia Driver Version: 515.43.04 (DGX-A100, Siton, Siton2), 470.82.01 (DGX-V100)

  2. CUDA 11.3 (DGX-A100, Siton), CUDA 10.1 (DGX-V100), CUDA 11.7 (Siton2)

  3. GCC/G++ 9.4.0+ (DGX-A100, Siton, DGX-V100), GCC/G++ 7.5.0+ (Siton2)

  4. OS: Ubuntu (other Linux distributions should also work)

  5. Intel PCM (choose the package matching your OS version)

$ wget https://download.opensuse.org/repositories/home:/opcm/xUbuntu_18.04/amd64/pcm_0-0+651.1_amd64.deb

  6. PyTorch: pytorch-cu113 (DGX-A100, Siton), pytorch-cu101 (DGX-V100), pytorch-cu117 (Siton2), plus torchmetrics

$ pip3 install torch-cu1xx

  7. DGL: dgl 0.9.1 (DGX-A100, Siton, DGX-V100), dgl 1.1.0 (Siton2)

$ pip3 install dgl -f https://data.dgl.ai/wheels/cu1xx/repo.html

  8. MPI

Datasets

Table 3

| Datasets | PR | PA | CO | UKS | UKL | CL |
|---|---|---|---|---|---|---|
| #Vertices | 2.4M | 111M | 65M | 133M | 0.79B | 1B |
| #Edges | 120M | 1.6B | 1.8B | 5.5B | 47.2B | 42.5B |
| Feature Size | 100 | 128 | 256 | 256 | 128 | 128 |
| Topology Storage | 640MB | 6.4GB | 7.2GB | 22GB | 189GB | 170GB |
| Feature Storage | 960MB | 56GB | 65GB | 136GB | 400GB | 512GB |
| Class Number | 47 | 2 | 2 | 2 | 2 | 2 |

We store the pre-processed datasets on Siton2 under /home/atc-artifacts-user/datasets. We also place the partitioning results for the demos on Siton2, so you do not need to wait for partitioning to finish.

Use Pre-installed Legion

There are four steps to train a GNN model with Legion. For these steps, you need to switch to the root user on Siton2.

Step 1. Add environment variables temporarily

1. $ cd /home/atc-artifacts-user/legion-atc-artifacts/src/ && source env.sh

Step 2. Load the msr kernel module as root for PCM

2. $ modprobe msr

After these two steps, you need to prepare two sessions to run Legion's sampling server and training backend separately.

Step 3. Run Legion sampling server

On Siton2, Legion can be tested in two modes: with NVLink and without NVLink. Users can modify the following parameters:

Choose dataset

argparser.add_argument('--dataset_path', type=str, default="/home/atc-artifacts-user/datasets")
argparser.add_argument('--dataset', type=str, default="PR")

You can change "PR" to "PA", "CO", "UKS", "UKL", or "CL".

Set sampling hyper-parameters

argparser.add_argument('--train_batch_size', type=int, default=8000)
argparser.add_argument('--epoch', type=int, default=10)

Set the GPU number, the GPU memory limit for caching, and whether to use NVLinks

argparser.add_argument('--gpu_number', type=int, default=1)
argparser.add_argument('--cache_memory', type=int, default=200000000) ## default is 200000000 Bytes
argparser.add_argument('--usenvlink', type=int, default=1)## 1 means true, 0 means false.
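
For reference, here is a minimal sketch of how these flags could fit together in one argparse parser. Only the argument names and defaults above come from Legion; the function name and surrounding structure are our own illustration.

import argparse

def parse_server_args():
    ## Sketch only: mirrors the documented flags of the sampling server.
    argparser = argparse.ArgumentParser('legion_server')
    argparser.add_argument('--dataset_path', type=str, default="/home/atc-artifacts-user/datasets")
    argparser.add_argument('--dataset', type=str, default="PR") ## PR, PA, CO, UKS, UKL, or CL
    argparser.add_argument('--train_batch_size', type=int, default=8000)
    argparser.add_argument('--epoch', type=int, default=10)
    argparser.add_argument('--gpu_number', type=int, default=1)
    argparser.add_argument('--cache_memory', type=int, default=200000000) ## cache size in bytes
    argparser.add_argument('--usenvlink', type=int, default=1) ## 1 means true, 0 means false
    return argparser.parse_args()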

Start server

3. $ cd /home/atc-artifacts-user/legion-atc-artifacts/ && python3 legion_server.py

Sampling server functionality

This figure shows that PCM is working:

[Figure: screenshot of PCM output]

This figure shows the system outputs, including dataset statistics, training statistics, and cache management outputs:

[Figure: screenshot of Legion's system outputs]

Step 4. Run Legion training backend

After Legion outputs "System is ready for serving", run the training backend as the artifact user. "legion_graphsage.py" and "legion_gcn.py" train the GraphSAGE and GCN models, respectively. Users can modify the following parameters:

Set dataset statistics

For the specific numbers, please refer to Table 3 (Datasets); a small lookup helper built from that table is sketched after the snippet below.

    argparser.add_argument('--class_num', type=int, default=47)
    argparser.add_argument('--features_num', type=int, default=100)
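
The per-dataset values from Table 3 can be kept in a small lookup table so that both flags stay consistent with the chosen dataset. This dict is a hypothetical convenience of ours, not part of Legion:

## Hypothetical helper: (class_num, features_num) per dataset, copied from Table 3.
DATASET_STATS = {
    'PR':  (47, 100),
    'PA':  (2, 128),
    'CO':  (2, 256),
    'UKS': (2, 256),
    'UKL': (2, 128),
    'CL':  (2, 128),
}
class_num, features_num = DATASET_STATS['PR']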

Set GNN hyper-parameters

These are the default settings in Legion:
argparser.add_argument('--train_batch_size', type=int, default=8000) 
argparser.add_argument('--hidden_dim', type=int, default=256)
argparser.add_argument('--drop_rate', type=float, default=0.5)
argparser.add_argument('--learning_rate', type=float, default=0.003)
argparser.add_argument('--epoch', type=int, default=10)
argparser.add_argument('--gpu_num', type=int, default=1) 

Note that train_batch_size, epoch, and gpu_num must match the hyper-parameters the sampling server was started with (gpu_number on the server side); a hypothetical consistency check is sketched below.
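
A minimal sketch of such a check, assuming server_args and train_args hold the parsed arguments of the sampling server and the training backend (neither this function nor these variable names exist in Legion):

def check_consistent(server_args, train_args):
    ## The two processes must agree on batch size, epoch count, and GPU count.
    assert train_args.train_batch_size == server_args.train_batch_size
    assert train_args.epoch == server_args.epoch
    ## The corresponding flags are named differently on the two sides.
    assert train_args.gpu_num == server_args.gpu_number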

Start training backend

4. $ cd /home/atc-artifacts-user/legion-atc-artifacts/pytorch-extension/ && python3 legion_graphsage.py

Training backend functionality

When the training backend runs successfully, the system outputs information including epoch time, validation accuracy, and testing accuracy.


If a segmentation fault occurs or you kill Legion's processes, please remove the leftover semaphores in /dev/shm before restarting.
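
On Linux, POSIX named semaphores appear as files named sem.* under /dev/shm, so the cleanup can be scripted as below; the exact semaphore names Legion creates may differ, so inspect /dev/shm first.

## Remove leftover POSIX named semaphores after a crash or kill.
import glob, os
for sem in glob.glob('/dev/shm/sem.*'):
    os.remove(sem)
    print('removed', sem)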

Build Legion from Source

$ git clone https://github.com/JIESUN233/Legion.git

Prepare Graph Partitioning Tool: XtraPulp

Prepare MPI on the machine and download XtraPulp:

1. $ git clone https://github.com/luoxiaojian/xtrapulp.git

To make:

  1. Set MPICXX in the Makefile to your C++ compiler and adjust CXXFLAGS if necessary.
     - OpenMP 3.1 support is required for parallel execution.
     - No other dependencies are needed.

Then build the xtrapulp executable and library:

2. $ cd xtrapulp/ && make 

To build just the libxtrapulp.a static library for use with xtrapulp.h:

3. $ make libxtrapulp

Legion Compiling

First, build Legion's sampling server:

1. $ cd /home/atc-artifacts-user/legion-atc-artifacts/src/

2. $ make cuda && make main

Second, build Legion's training backend:

3. $ cd /home/atc-artifacts-user/legion-atc-artifacts/pytorch_extension/

Switch to the root user and execute:

4. $ python3 setup.py install

Run Legion

Run Legion the same way as described in "Use Pre-installed Legion" above.