GPU-Benchmarks-on-LLM-Inference

Multiple NVIDIA GPUs or Apple Silicon for Large Language Model Inference? 🧐

Description

Use llama.cpp to test the inference speed of the LLaMA 3 models on different GPUs: cloud GPUs on RunPod, a 13-inch M1 MacBook Air, a 14-inch M1 Max MacBook Pro, an M2 Ultra Mac Studio, and a 16-inch M3 Max MacBook Pro.

Overview

Average generation speed (tokens/s) for 1024 tokens, by GPU, on LLaMA 3. Higher is better.

| GPU | 8B Q4_K_M | 8B F16 | 70B Q4_K_M | 70B F16 |
|---|---|---|---|---|
| 3070 8GB | 70.94 | OOM | OOM | OOM |
| 3080 10GB | 106.40 | OOM | OOM | OOM |
| 3080 Ti 12GB | 106.71 | OOM | OOM | OOM |
| 4070 Ti 12GB | 82.21 | OOM | OOM | OOM |
| 4080 16GB | 106.22 | 40.29 | OOM | OOM |
| RTX 4000 Ada 20GB | 58.59 | 20.85 | OOM | OOM |
| 3090 24GB | 111.74 | 46.51 | OOM | OOM |
| 4090 24GB | 127.74 | 54.34 | OOM | OOM |
| RTX 5000 Ada 32GB | 89.87 | 32.67 | OOM | OOM |
| 3090 24GB * 2 | 108.07 | 47.15 | 16.29 | OOM |
| 4090 24GB * 2 | 122.56 | 53.27 | 19.06 | OOM |
| RTX A6000 48GB | 102.22 | 40.25 | 14.58 | OOM |
| RTX 6000 Ada 48GB | 130.99 | 51.97 | 18.36 | OOM |
| A40 48GB | 88.95 | 33.95 | 12.08 | OOM |
| L40S 48GB | 113.60 | 43.42 | 15.31 | OOM |
| RTX 4000 Ada 20GB * 4 | 56.14 | 20.58 | 7.33 | OOM |
| A100 PCIe 80GB | 138.31 | 54.56 | 22.11 | OOM |
| A100 SXM 80GB | 133.38 | 53.18 | 24.33 | OOM |
| H100 PCIe 80GB | 144.49 | 67.79 | 25.01 | OOM |
| 3090 24GB * 4 | 104.94 | 46.40 | 16.89 | OOM |
| 4090 24GB * 4 | 117.61 | 52.69 | 18.83 | OOM |
| RTX 5000 Ada 32GB * 4 | 82.73 | 31.94 | 11.45 | OOM |
| 3090 24GB * 6 | 101.07 | 45.55 | 16.93 | 5.82 |
| 4090 24GB * 8 | 116.13 | 52.12 | 18.76 | 6.45 |
| RTX A6000 48GB * 4 | 93.73 | 38.87 | 14.32 | 4.74 |
| RTX 6000 Ada 48GB * 4 | 118.99 | 50.25 | 17.96 | 6.06 |
| A40 48GB * 4 | 83.79 | 33.28 | 11.91 | 3.98 |
| L40S 48GB * 4 | 105.72 | 42.48 | 14.99 | 5.03 |
| A100 PCIe 80GB * 4 | 117.30 | 51.54 | 22.68 | 7.38 |
| A100 SXM 80GB * 4 | 97.70 | 45.45 | 19.60 | 6.92 |
| H100 PCIe 80GB * 4 | 118.14 | 62.90 | 26.20 | 9.63 |
| M1 7‑Core GPU 8GB | 9.72 | OOM | OOM | OOM |
| M1 Max 32‑Core GPU 64GB | 34.49 | 18.43 | 4.09 | OOM |
| M2 Ultra 76-Core GPU 192GB | 76.28 | 36.25 | 12.13 | 4.71 |
| M3 Max 40‑Core GPU 64GB | 50.74 | 22.39 | 7.53 | OOM |

Average prompt evaluation speed (tokens/s) for a 1024-token prompt, by GPU, on LLaMA 3. Higher is better.

| GPU | 8B Q4_K_M | 8B F16 | 70B Q4_K_M | 70B F16 |
|---|---|---|---|---|
| 3070 8GB | 2283.62 | OOM | OOM | OOM |
| 3080 10GB | 3557.02 | OOM | OOM | OOM |
| 3080 Ti 12GB | 3556.67 | OOM | OOM | OOM |
| 4070 Ti 12GB | 3653.07 | OOM | OOM | OOM |
| 4080 16GB | 5064.99 | 6758.90 | OOM | OOM |
| RTX 4000 Ada 20GB | 2310.53 | 2951.87 | OOM | OOM |
| 3090 24GB | 3865.39 | 4239.64 | OOM | OOM |
| 4090 24GB | 6898.71 | 9056.26 | OOM | OOM |
| RTX 5000 Ada 32GB | 4467.46 | 5835.41 | OOM | OOM |
| 3090 24GB * 2 | 4004.14 | 4690.50 | 393.89 | OOM |
| 4090 24GB * 2 | 8545.00 | 11094.51 | 905.38 | OOM |
| RTX A6000 48GB | 3621.81 | 4315.18 | 466.82 | OOM |
| RTX 6000 Ada 48GB | 5560.94 | 6205.44 | 547.03 | OOM |
| A40 48GB | 3240.95 | 4043.05 | 239.92 | OOM |
| L40S 48GB | 5908.52 | 2491.65 | 649.08 | OOM |
| RTX 4000 Ada 20GB * 4 | 3369.24 | 4366.64 | 306.44 | OOM |
| A100 PCIe 80GB | 5800.48 | 7504.24 | 726.65 | OOM |
| A100 SXM 80GB | 5863.92 | 681.47 | 796.81 | OOM |
| H100 PCIe 80GB | 7760.16 | 10342.63 | 984.06 | OOM |
| 3090 24GB * 4 | 4653.93 | 5713.41 | 350.06 | OOM |
| 4090 24GB * 4 | 9609.29 | 12304.19 | 898.17 | OOM |
| RTX 5000 Ada 32GB * 4 | 6530.78 | 2877.66 | 541.54 | OOM |
| 3090 24GB * 6 | 5153.05 | 5952.55 | 739.40 | 927.23 |
| 4090 24GB * 8 | 9706.82 | 11818.92 | 1336.26 | 1890.48 |
| RTX A6000 48GB * 4 | 5340.10 | 6448.85 | 539.20 | 792.23 |
| RTX 6000 Ada 48GB * 4 | 9679.55 | 12637.94 | 714.93 | 1270.39 |
| A40 48GB * 4 | 4841.98 | 5931.06 | 263.36 | 900.79 |
| L40S 48GB * 4 | 9008.27 | 2541.61 | 634.05 | 1478.83 |
| A100 PCIe 80GB * 4 | 8889.35 | 11670.74 | 978.06 | 1733.41 |
| A100 SXM 80GB * 4 | 7782.25 | 674.11 | 539.08 | 1834.16 |
| H100 PCIe 80GB * 4 | 11560.23 | 15612.81 | 1133.23 | 2420.10 |
| M1 7‑Core GPU 8GB | 87.26 | OOM | OOM | OOM |
| M1 Max 32‑Core GPU 64GB | 355.45 | 418.77 | 33.01 | OOM |
| M2 Ultra 76-Core GPU 192GB | 1023.89 | 1202.74 | 117.76 | 145.82 |
| M3 Max 40‑Core GPU 64GB | 678.04 | 751.49 | 62.88 | OOM |

Model

Thanks to shawwn for the original LLaMA model weights (7B, 13B, 30B, 65B): llama-dl. Access LLaMA 2 from Meta AI, and LLaMA 3 from Meta Llama 3 on Hugging Face or from my Hugging Face repos: Xiongjie Dai.
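
If you start from the Hugging Face weights, convert them to GGUF and quantize them before running the commands below. A minimal sketch, assuming a llama.cpp checkout with the weights in ./models/8B-v3 (the script and binary names match spring-2024 llama.cpp; newer versions rename them to convert_hf_to_gguf.py and llama-quantize):

```
# Convert the HF checkpoint to an F16 GGUF, then quantize it to Q4_K_M.
python3 convert-hf-to-gguf.py ./models/8B-v3 --outtype f16 --outfile ./models/8B-v3/ggml-model-F16.gguf
./quantize ./models/8B-v3/ggml-model-F16.gguf ./models/8B-v3/ggml-model-Q4_K_M.gguf Q4_K_M
```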

Usage

Build
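
Build llama.cpp per its upstream README; a minimal sketch for the versions current in these May 2024 snapshots (the LLAMA_CUDA make flag is an assumption for that era and has since been replaced by CMake options; check the upstream instructions for your version):

```
git clone https://github.com/ggerganov/llama.cpp
cd llama.cpp
make LLAMA_CUDA=1   # NVIDIA GPUs; on Apple Silicon a plain `make` enables Metal by default
```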

Text Completion

Use the argument -ngl 0 to run inference entirely on the CPU, or -ngl 10000 to ensure all layers are offloaded to the GPU.

```
!./main -ngl 10000 -m ./models/8B-v3/ggml-model-Q4_K_M.gguf --color --temp 1.1 --repeat_penalty 1.1 -c 0 -n 1024 -e -s 0 -p """\
First Citizen:\n\n\
Before we proceed any further, hear me speak.\n\n\
\n\n\
All:\n\n\
Speak, speak.\n\n\
\n\n\
First Citizen:\n\n\
You are all resolved rather to die than to famish?\n\n\
\n\n\
All:\n\n\
Resolved. resolved.\n\n\
\n\n\
First Citizen:\n\n\
First, you know Caius Marcius is chief enemy to the people.\n\n\
\n\n\
All:\n\n\
We know't, we know't.\n\n\
\n\n\
First Citizen:\n\n\
Let us kill him, and we'll have corn at our own price. Is't a verdict?\n\n\
\n\n\
All:\n\n\
No more talking on't; let it be done: away, away!\n\n\
\n\n\
Second Citizen:\n\n\
One word, good citizens.\n\n\
\n\n\
First Citizen:\n\n\
We are accounted poor citizens, the patricians good. What authority surfeits on would relieve us: if they would yield us but the superfluity, \
while it were wholesome, we might guess they relieved us humanely; but they think we are too dear: the leanness that afflicts us, the object of \
our misery, is as an inventory to particularise their abundance; our sufferance is a gain to them Let us revenge this with our pikes, \
ere we become rakes: for the gods know I speak this in hunger for bread, not in thirst for revenge.\n\n\
"""
```

Note: For Apple Silicon, check the recommendedMaxWorkingSetSize in the output to see how much memory can be allocated on the GPU while maintaining performance. Currently, only about 70% of unified memory can be allocated to the GPU on a 32GB M1 Max (roughly 0.70 × 32 GB ≈ 22.4 GB), and around 78% of memory is usable by the GPU on machines with larger memory. (Source: https://developer.apple.com/videos/play/tech-talks/10580/?time=346) To utilize the whole unified memory, use -ngl 0 to run inference on the CPU only. (Thanks to: https://github.com/ggerganov/llama.cpp/pull/1826)
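
For example, a CPU-only variant of the text-completion command above (only the -ngl value changes; the prompt is elided here for brevity):

```
!./main -ngl 0 -m ./models/8B-v3/ggml-model-Q4_K_M.gguf --color --temp 1.1 --repeat_penalty 1.1 -c 0 -n 1024 -e -s 0 -p "..."
```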

Chat template for LLaMA 3 🦙🦙🦙

```
!./main -ngl 10000 -m ./models/8B-v3-instruct/ggml-model-Q4_K_M.gguf --color -c 0 -n -2 -e -s 0 --mirostat 2 -i --no-display-prompt --keep -1 \
-r '<|eot_id|>' -p '<|begin_of_text|><|start_header_id|>system<|end_header_id|>\n\nYou are a helpful assistant.<|eot_id|><|start_header_id|>user<|end_header_id|>\n\nHi!<|eot_id|><|start_header_id|>assistant<|end_header_id|>\n\n' \
--in-prefix '<|start_header_id|>user<|end_header_id|>\n\n' --in-suffix '<|eot_id|><|start_header_id|>assistant<|end_header_id|>\n\n'
```

Benchmark

```
!./llama-bench -p 512,1024,4096,8192 -n 512,1024,4096,8192 -m ./models/8B-v3/ggml-model-Q4_K_M.gguf
```
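
llama-bench can also sweep other parameters; for instance, a sketch comparing CPU-only inference against full GPU offload (comma-separated -ngl lists are supported by llama-bench, mirroring the -ngl note above):

```
!./llama-bench -m ./models/8B-v3/ggml-model-Q4_K_M.gguf -p 1024 -n 1024 -ngl 0,10000
```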

Total VRAM Requirements

| Model | Quantized size (Q4_K_M) | Original size (f16) |
|---|---|---|
| 8B | 4.58 GB | 14.96 GB |
| 70B | 39.59 GB | 131.42 GB |

You can estimate the VRAM requirement using this tool: LLM RAM Calculator
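
As a sanity check on the table above, the F16 size is roughly parameter count × 2 bytes per weight. A quick sketch (the ~70.6B parameter count for LLaMA 3 70B is an assumption from Meta's model card):

```
# ~70.6e9 params * 2 bytes/weight, expressed in GiB: prints ~131.5, close to the figure above.
awk 'BEGIN { printf "%.1f GiB\n", 70.6e9 * 2 / 2^30 }'
```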

Perplexity table on LLaMA 3 70B

Lower perplexity is better. (Credit to: dranger003)

| Quantization | Size (GiB) | Perplexity (wiki.test) | Delta (FP16) |
|---|---|---|---|
| IQ1_S | 14.29 | 9.8655 +/- 0.0625 | 248.51% |
| IQ1_M | 15.60 | 8.5193 +/- 0.0530 | 201.94% |
| IQ2_XXS | 17.79 | 6.6705 +/- 0.0405 | 135.64% |
| IQ2_XS | 19.69 | 5.7486 +/- 0.0345 | 103.07% |
| IQ2_S | 20.71 | 5.5215 +/- 0.0318 | 95.05% |
| Q2_K_S | 22.79 | 5.4334 +/- 0.0325 | 91.94% |
| IQ2_M | 22.46 | 4.8959 +/- 0.0276 | 72.35% |
| Q2_K | 24.56 | 4.7763 +/- 0.0274 | 68.73% |
| IQ3_XXS | 25.58 | 3.9671 +/- 0.0211 | 40.14% |
| IQ3_XS | 27.29 | 3.7210 +/- 0.0191 | 31.45% |
| Q3_K_S | 28.79 | 3.6502 +/- 0.0192 | 28.95% |
| IQ3_S | 28.79 | 3.4698 +/- 0.0174 | 22.57% |
| IQ3_M | 29.74 | 3.4402 +/- 0.0171 | 21.53% |
| Q3_K_M | 31.91 | 3.3617 +/- 0.0172 | 18.75% |
| Q3_K_L | 34.59 | 3.3016 +/- 0.0168 | 16.63% |
| IQ4_XS | 35.30 | 3.0310 +/- 0.0149 | 7.07% |
| IQ4_NL | 37.30 | 3.0261 +/- 0.0149 | 6.90% |
| Q4_K_S | 37.58 | 3.0050 +/- 0.0148 | 6.15% |
| Q4_K_M | 39.60 | 2.9674 +/- 0.0146 | 4.83% |
| Q5_K_S | 45.32 | 2.8843 +/- 0.0141 | 1.89% |
| Q5_K_M | 46.52 | 2.8656 +/- 0.0139 | 1.23% |
| Q6_K | 53.91 | 2.8441 +/- 0.0138 | 0.47% |
| Q8_0 | 69.83 | 2.8316 +/- 0.0138 | 0.03% |
| F16 | 131.43 | 2.8308 +/- 0.0138 | 0.00% |
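
The Delta column is the relative increase over the F16 perplexity (a formula inferred from the numbers, not stated by the source). For example, reproducing the IQ1_S row:

```
# (9.8655 - 2.8308) / 2.8308 * 100 = 248.51, matching the IQ1_S row above.
awk 'BEGIN { printf "%.2f%%\n", (9.8655 - 2.8308) / 2.8308 * 100 }'
```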

Benchmarks

TG means "text generation" and PP means "prompt processing"; the number after each is the total count of tokens generated or processed (e.g. tg 1024 generates 1024 tokens). OOM means out of memory. All figures are average speeds in tokens/s.

LLaMA 3 🦙🦙🦙:

NVIDIA Gaming GPUs (OS: Ubuntu 22.04.2 LTS, PyTorch 2.2.0, Python 3.10, CUDA 12.1.1 on RunPod; snapshots from May 2024)

| GPU | Model | tg 512 | tg 1024 | tg 4096 | tg 8192 | pp 512 | pp 1024 | pp 4096 | pp 8192 |
|---|---|---|---|---|---|---|---|---|---|
| 3070 8GB | 8B Q4_K_M | 72.79 | 70.94 | 67.01 | 61.64 | 2402.51 | 2283.62 | 1826.59 | 1419.97 |
| 3080 10GB | 8B Q4_K_M | 109.57 | 106.40 | 98.67 | 89.90 | 3728.86 | 3557.02 | 2852.06 | 2232.21 |
| 3080 Ti 12GB | 8B Q4_K_M | 110.60 | 106.71 | 98.34 | 88.63 | 3690.30 | 3556.67 | 2947.11 | 2381.52 |
| 4070 Ti 12GB | 8B Q4_K_M | 83.50 | 82.21 | 78.59 | 73.46 | 3936.29 | 3653.07 | 2729.71 | 2019.71 |
| 4080 16GB | 8B Q4_K_M | 108.15 | 106.22 | 100.44 | 93.71 | 5389.74 | 5064.99 | 3790.96 | 2882.03 |
| | 8B F16 | 40.58 | 40.29 | 39.44 | OOM | 7246.97 | 6758.90 | 4720.22 | OOM |
| 3090 24GB | 8B Q4_K_M | 115.42 | 111.74 | 97.31 | 87.49 | 4030.40 | 3865.39 | 3169.91 | 2527.40 |
| | 8B F16 | 47.40 | 46.51 | 44.79 | 42.62 | 4444.65 | 4239.64 | 3410.47 | 2667.14 |
| 4090 24GB | 8B Q4_K_M | 130.58 | 127.74 | 119.44 | 110.66 | 7138.99 | 6898.71 | 5265.68 | 4039.68 |
| | 8B F16 | 54.84 | 54.34 | 52.63 | 50.88 | 9382.00 | 9056.26 | 6531.36 | 4744.18 |
| 3090 24GB * 2 | 8B Q4_K_M | 111.67 | 108.07 | 99.60 | 90.77 | 3336.37 | 4004.14 | 4013.34 | 3433.59 |
| | 8B F16 | 47.72 | 47.15 | 45.56 | 43.61 | 4122.66 | 4690.50 | 4788.60 | 3851.37 |
| | 70B Q4_K_M | 16.57 | 16.29 | 15.36 | 14.34 | 357.32 | 393.89 | 379.52 | 338.82 |
| 4090 24GB * 2 | 8B Q4_K_M | 124.65 | 122.56 | 114.32 | 106.18 | 7003.51 | 8545.00 | 8422.04 | 6895.68 |
| | 8B F16 | 53.64 | 53.27 | 51.64 | 49.83 | 9177.92 | 11094.51 | 10329.29 | 8067.29 |
| | 70B Q4_K_M | 19.22 | 19.06 | 18.54 | 17.92 | 839.43 | 905.38 | 846.38 | 723.24 |
| 3090 24GB * 4 | 8B Q4_K_M | 108.66 | 104.94 | 97.09 | 88.35 | 3742.66 | 4653.93 | 5826.91 | 4913.40 |
| | 8B F16 | 47.07 | 46.40 | 44.76 | 42.81 | 4608.40 | 5713.41 | 6596.17 | 5361.52 |
| | 70B Q4_K_M | 17.07 | 16.89 | 16.24 | 15.39 | 300.79 | 350.06 | 367.75 | 331.37 |
| 4090 24GB * 4 | 8B Q4_K_M | 120.32 | 117.61 | 110.52 | 103.13 | 6748.96 | 9609.29 | 12491.10 | 10993.75 |
| | 8B F16 | 53.10 | 52.69 | 51.00 | 49.21 | 8750.57 | 12304.19 | 15143.84 | 12919.74 |
| | 70B Q4_K_M | 19.80 | 18.83 | 18.35 | 17.66 | 834.74 | 898.17 | 839.97 | 718.01 |
| 3090 24GB * 6 | 8B Q4_K_M | 104.17 | 101.07 | 94.06 | 85.93 | 3359.99 | 5153.05 | 7690.65 | 7084.44 |
| | 8B F16 | 46.23 | 45.55 | 43.99 | 42.15 | 3875.97 | 5952.55 | 9437.91 | 8780.49 |
| | 70B Q4_K_M | 17.09 | 16.93 | 16.32 | 15.45 | 456.95 | 739.40 | 786.79 | 695.44 |
| | 70B F16 | 5.85 | 5.82 | 5.76 | 5.53 | 579.00 | 927.23 | 998.79 | 813.99 |
| 4090 24GB * 8 | 8B Q4_K_M | 118.09 | 116.13 | 108.37 | 100.95 | 6172.06 | 9706.82 | 15089.45 | 13802.08 |
| | 8B F16 | 52.51 | 52.12 | 50.39 | 48.72 | 7889.26 | 11818.92 | 16462.18 | 14300.98 |
| | 70B Q4_K_M | 18.94 | 18.76 | 18.23 | 17.57 | 812.95 | 1336.26 | 1488.36 | 1320.36 |
| | 70B F16 | 6.47 | 6.45 | 6.39 | 6.31 | 1183.87 | 1890.48 | 2311.43 | 1995.85 |

NVIDIA Professional GPUs (OS: Ubuntu 22.04.2 LTS, PyTorch 2.2.0, Python 3.10, CUDA 12.1.1 on RunPod; snapshots from May 2024)

| GPU | Model | tg 512 | tg 1024 | tg 4096 | tg 8192 | pp 512 | pp 1024 | pp 4096 | pp 8192 |
|---|---|---|---|---|---|---|---|---|---|
| RTX 4000 Ada 20GB | 8B Q4_K_M | 59.15 | 58.59 | 55.94 | 52.39 | 2451.93 | 2310.53 | 1798.01 | 1337.15 |
| | 8B F16 | 20.92 | 20.85 | 20.50 | 20.01 | 3121.67 | 2951.87 | 2200.58 | 1557.00 |
| RTX 5000 Ada 32GB | 8B Q4_K_M | 91.39 | 89.87 | 85.01 | 80.00 | 4761.12 | 4467.46 | 3272.94 | 2422.33 |
| | 8B F16 | 32.84 | 32.67 | 32.04 | 31.27 | 6160.57 | 5835.41 | 4008.30 | 2808.89 |
| RTX A6000 48GB | 8B Q4_K_M | 105.39 | 102.22 | 94.82 | 86.73 | 3780.55 | 3621.81 | 2917.23 | 2292.61 |
| | 8B F16 | 40.71 | 40.25 | 39.14 | 37.73 | 4511.02 | 4315.18 | 3365.79 | 2566.46 |
| | 70B Q4_K_M | 14.71 | 14.58 | 14.09 | 13.42 | 482.19 | 466.82 | 404.61 | 340.73 |
| RTX 6000 Ada 48GB | 8B Q4_K_M | 133.44 | 130.99 | 120.74 | 111.57 | 5791.74 | 5560.94 | 4495.19 | 3542.57 |
| | 8B F16 | 52.32 | 51.97 | 50.21 | 48.79 | 6663.13 | 6205.44 | 4969.46 | 3915.81 |
| | 70B Q4_K_M | 18.52 | 18.36 | 17.80 | 16.97 | 565.98 | 547.03 | 481.59 | 419.76 |
| A40 48GB | 8B Q4_K_M | 91.27 | 88.95 | 83.10 | 76.45 | 3324.98 | 3240.95 | 2586.50 | 2013.34 |
| | 8B F16 | 34.26 | 33.95 | 33.06 | 31.93 | 4203.75 | 4043.05 | 3069.98 | 2295.02 |
| | 70B Q4_K_M | 11.60 | 12.08 | 11.68 | 11.26 | 209.38 | 239.92 | 268.89 | 291.13 |
| L40S 48GB | 8B Q4_K_M | 115.55 | 113.60 | 105.50 | 97.98 | 6035.24 | 5908.52 | 4335.18 | 3192.70 |
| | 8B F16 | 43.69 | 43.42 | 42.22 | 41.05 | 2253.93 | 2491.65 | 2887.70 | 3312.16 |
| | 70B Q4_K_M | 15.46 | 15.31 | 14.92 | 14.45 | 673.63 | 649.08 | 542.29 | 446.48 |
| RTX 4000 Ada 20GB * 4 | 8B Q4_K_M | 56.64 | 56.14 | 53.58 | 50.19 | 2413.07 | 3369.24 | 4404.45 | 3733.15 |
| | 8B F16 | 20.65 | 20.58 | 20.24 | 19.74 | 3220.21 | 4366.64 | 5366.39 | 4323.70 |
| | 70B Q4_K_M | 7.36 | 7.33 | 7.12 | 6.84 | 282.28 | 306.44 | 290.70 | 243.45 |
| A100 PCIe 80GB | 8B Q4_K_M | 140.62 | 138.31 | 127.22 | 117.60 | 5981.04 | 5800.48 | 4959.84 | 4083.37 |
| | 8B F16 | 54.84 | 54.56 | 53.02 | 51.24 | 7741.34 | 7504.24 | 6137.54 | 4849.11 |
| | 70B Q4_K_M | 22.31 | 22.11 | 20.93 | 19.53 | 744.12 | 726.65 | 653.20 | 573.95 |
| A100 SXM 80GB | 8B Q4_K_M | 135.04 | 133.38 | 125.09 | 115.92 | 5947.64 | 5863.92 | 5121.60 | 4137.08 |
| | 8B F16 | 53.49 | 53.18 | 52.03 | 50.52 | 603.76 | 681.47 | 866.13 | 1323.07 |
| | 70B Q4_K_M | 24.61 | 24.33 | 22.91 | 21.32 | 817.58 | 796.81 | 714.07 | 625.66 |
| H100 PCIe 80GB | 8B Q4_K_M | 145.55 | 144.49 | 136.06 | 126.83 | 8125.45 | 7760.16 | 6423.31 | 5185.03 |
| | 8B F16 | 68.03 | 67.79 | 65.97 | 63.55 | 10815.51 | 10342.63 | 8106.53 | 6191.45 |
| | 70B Q4_K_M | 25.03 | 25.01 | 23.82 | 22.39 | 1012.73 | 984.06 | 863.37 | 741.52 |
| RTX 5000 Ada 32GB * 4 | 8B Q4_K_M | 84.07 | 82.73 | 78.45 | 74.11 | 4671.34 | 6530.78 | 8004.94 | 6790.82 |
| | 8B F16 | 32.10 | 31.94 | 31.32 | 30.58 | 2427.96 | 2877.66 | 3836.89 | 5235.00 |
| | 70B Q4_K_M | 11.51 | 11.45 | 11.24 | 10.94 | 502.37 | 541.54 | 504.23 | 424.29 |
| RTX A6000 48GB * 4 | 8B Q4_K_M | 96.48 | 93.73 | 87.72 | 80.88 | 3712.99 | 5340.10 | 7126.45 | 6438.82 |
| | 8B F16 | 39.34 | 38.87 | 37.81 | 36.51 | 4508.60 | 6448.85 | 8327.16 | 7298.18 |
| | 70B Q4_K_M | 14.44 | 14.32 | 13.91 | 13.32 | 496.08 | 539.20 | 511.22 | 434.31 |
| | 70B F16 | 4.76 | 4.74 | 4.70 | 4.63 | 510.31 | 792.23 | 751.37 | 748.06 |
| RTX 6000 Ada 48GB * 4 | 8B Q4_K_M | 121.21 | 118.99 | 110.65 | 103.18 | 6640.86 | 9679.55 | 11734.85 | 10278.14 |
| | 8B F16 | 50.61 | 50.25 | 48.69 | 47.18 | 8953.30 | 12637.94 | 13971.34 | 11702.36 |
| | 70B Q4_K_M | 18.13 | 17.96 | 17.49 | 16.89 | 656.61 | 714.93 | 697.10 | 612.54 |
| | 70B F16 | 6.08 | 6.06 | 6.01 | 5.94 | 864.12 | 1270.39 | 1363.75 | 1182.28 |
| A40 48GB * 4 | 8B Q4_K_M | 85.91 | 83.79 | 78.56 | 72.70 | 3321.27 | 4841.98 | 6442.38 | 5742.84 |
| | 8B F16 | 33.60 | 33.28 | 32.42 | 31.38 | 4144.88 | 5931.06 | 7544.92 | 6516.60 |
| | 70B Q4_K_M | 11.99 | 11.91 | 11.60 | 11.17 | 236.86 | 263.36 | 300.57 | 312.31 |
| | 70B F16 | 3.99 | 3.98 | 3.95 | 3.90 | 610.51 | 900.79 | 893.28 | 735.16 |
| L40S 48GB * 4 | 8B Q4_K_M | 107.53 | 105.72 | 98.59 | 92.20 | 6125.69 | 9008.27 | 10566.97 | 9017.90 |
| | 8B F16 | 42.70 | 42.48 | 41.33 | 40.19 | 2211.45 | 2541.61 | 3093.33 | 4336.81 |
| | 70B Q4_K_M | 15.12 | 14.99 | 14.63 | 14.17 | 591.05 | 634.05 | 605.66 | 541.67 |
| | 70B F16 | 5.05 | 5.03 | 4.99 | 4.94 | 1042.13 | 1478.83 | 1427.77 | 1150.63 |
| A100 PCIe 80GB * 4 | 8B Q4_K_M | 119.28 | 117.30 | 110.75 | 103.87 | 6076.58 | 8889.35 | 12724.54 | 11803.39 |
| | 8B F16 | 51.63 | 51.54 | 50.20 | 48.73 | 8088.79 | 11670.74 | 16025.11 | 14269.17 |
| | 70B Q4_K_M | 22.91 | 22.68 | 21.41 | 19.96 | 771.28 | 978.06 | 1138.60 | 1043.15 |
| | 70B F16 | 7.40 | 7.38 | 7.23 | 7.06 | 1172.14 | 1733.41 | 1846.36 | 1592.37 |
| A100 SXM 80GB * 4 | 8B Q4_K_M | 99.73 | 97.70 | 92.09 | 86.27 | 4850.88 | 7782.25 | 12242.53 | 11535.66 |
| | 8B F16 | 45.53 | 45.45 | 44.33 | 43.09 | 626.75 | 674.11 | 1003.37 | 1612.05 |
| | 70B Q4_K_M | 19.87 | 19.60 | 18.48 | 17.19 | 468.86 | 539.08 | 712.08 | 802.23 |
| | 70B F16 | 6.95 | 6.92 | 6.77 | 6.58 | 1233.31 | 1834.16 | 1972.48 | 1699.56 |
| H100 PCIe 80GB * 4 | 8B Q4_K_M | 123.08 | 118.14 | 113.12 | 110.34 | 8054.58 | 11560.23 | 16128.27 | 14682.97 |
| | 8B F16 | 64.00 | 62.90 | 61.45 | 59.72 | 11107.40 | 15612.81 | 20561.03 | 17762.96 |
| | 70B Q4_K_M | 26.40 | 26.20 | 24.60 | 23.68 | 1048.29 | 1133.23 | 1088.99 | 950.92 |
| | 70B F16 | 9.67 | 9.63 | 9.46 | 9.23 | 1681.45 | 2420.10 | 2437.53 | 2031.77 |

Apple Silicon (snapshots in May 2024)

| GPU | Model | tg 512 | tg 1024 | tg 4096 | tg 8192 | pp 512 | pp 1024 | pp 4096 | pp 8192 |
|---|---|---|---|---|---|---|---|---|---|
| M1 7‑Core GPU 8GB | 8B Q4_K_M | 10.20 | 9.72 | 11.77 | OOM | 94.48 | 87.26 | 96.53 | OOM |
| M1 Max 32‑Core GPU 64GB | 8B Q4_K_M | 35.73 | 34.49 | 31.18 | 26.84 | 408.23 | 355.45 | 329.84 | 302.92 |
| | 8B F16 | 18.75 | 18.43 | 16.33 | 15.03 | 517.34 | 418.77 | 374.09 | 351.46 |
| | 70B Q4_K_M | 4.34 | 4.09 | 4.09 | 3.71 | 34.96 | 33.01 | 32.64 | 30.97 |
| M2 Ultra 76-Core GPU 192GB | 8B Q4_K_M | 78.81 | 76.28 | 64.58 | 54.13 | 994.04 | 1023.89 | 979.47 | 913.55 |
| | 8B F16 | 36.90 | 36.25 | 33.67 | 30.68 | 1175.40 | 1202.74 | 1194.21 | 1103.44 |
| | 70B Q4_K_M | 12.48 | 12.13 | 10.75 | 9.34 | 118.79 | 117.76 | 109.53 | 108.57 |
| | 70B F16 | 4.76 | 4.71 | 4.48 | 4.23 | 147.58 | 145.82 | 133.75 | 135.15 |
| M3 Max 40‑Core GPU 64GB | 8B Q4_K_M | 48.97 | 50.74 | 44.21 | 36.12 | 693.32 | 678.04 | 573.09 | 505.32 |
| | 8B F16 | 22.04 | 22.39 | 20.72 | 18.74 | 769.84 | 751.49 | 609.97 | 515.15 |
| | 70B Q4_K_M | 7.65 | 7.53 | 6.58 | 5.60 | 70.19 | 62.88 | 64.90 | 61.96 |

Conclusion

For the same model size and quantization, performance is broadly similar across comparable GPUs. Splitting a model across multiple NVIDIA GPUs can slightly reduce text-generation speed, but it still boosts prompt-processing speed and lets larger models fit in VRAM.

Buy NVIDIA gaming GPUs to save money. Buy professional GPUs for your business. Buy a Mac if you want a computer that sits on your desk, saves energy, stays quiet, needs no maintenance, and is more fun. 😇

If you find this information helpful, please give me a star. ⭐️ Feel free to contact me if you have any advice. Thank you. 🤗