Peter Zhang
Jun 04, 2025 18:17
NVIDIA outlines the process for reproducing MLPerf v5.0 training results for LLM benchmarks, emphasizing hardware prerequisites and step-by-step execution.
NVIDIA has detailed the process for reproducing training results from the MLPerf v5.0 benchmarks, focusing specifically on Llama 2 70B LoRA fine-tuning and Llama 3.1 405B pretraining. This follows NVIDIA's earlier announcement of achieving up to 2.6x higher performance in MLPerf Training v5.0, as reported by Sukru Burc Eryilmaz on the NVIDIA blog. The benchmarks are part of MLPerf's comprehensive evaluation suite for measuring the performance of machine learning models.
Prerequisites for Benchmarking
To run these benchmarks, specific hardware and software requirements must be met. Llama 2 70B LoRA fine-tuning requires an NVIDIA DGX B200 or GB200 NVL72 system, while Llama 3.1 405B pretraining requires at least four GB200 NVL72 systems connected via InfiniBand. Substantial disk space is also needed: 2.5 TB for Llama 3.1 405B and 300 GB for LoRA fine-tuning.
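As a quick sanity check before downloading data, available disk space can be compared against these requirements. This is a minimal sketch: the 300 GB and 2.5 TB figures come from the article, but the benchmark labels, the checked path, and the threshold logic are assumptions, not part of NVIDIA's scripts.

```python
import shutil

# Disk-space requirements stated in the article, in GB. The dictionary keys
# are illustrative labels, not official benchmark identifiers.
REQUIRED_GB = {"llama2_70b_lora": 300, "llama31_405b_pretrain": 2500}

def free_gb(path="."):
    """Return free space at `path` in whole gigabytes."""
    return shutil.disk_usage(path).free // 10**9

# Check the directory that will hold datasets and checkpoints (assumed here
# to be the current directory -- adjust as needed).
for benchmark, need in REQUIRED_GB.items():
    have = free_gb(".")
    status = "OK" if have >= need else "INSUFFICIENT"
    print(f"{benchmark}: need {need} GB, have {have} GB -> {status}")
```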
Cluster and Environment Setup
NVIDIA uses a cluster managed by NVIDIA Base Command Manager (BCM), which requires an environment based on Slurm, Pyxis, and Enroot. Fast local storage configured as RAID0 is recommended to minimize data bottlenecks, and networking should use NVIDIA NVLink and InfiniBand for best performance.
Executing the Benchmarks
Execution involves several steps, starting with building a Docker container and downloading the necessary datasets and checkpoints. The benchmarks are then launched through Slurm, with a configuration file specifying hyperparameters and system settings. The process is designed to be flexible, allowing adjustments for different system sizes and requirements.
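The launch step can be pictured as assembling a Slurm submission from a per-system configuration. The sketch below only composes the command line; the script name `run.sub`, the variable names, and the container tag are hypothetical placeholders, not the actual names from NVIDIA's benchmark repository.

```python
# Sketch of composing (not executing) a Slurm submission for a benchmark run.
# All file names, variables, and values here are illustrative assumptions.
config = {
    "NNODES": 4,                  # e.g. four GB200 NVL72 systems for 405B pretraining
    "WALLTIME": "04:00:00",
    "CONT": "mlperf-llm:latest",  # hypothetical container image tag
}

def build_sbatch_cmd(cfg, script="run.sub"):
    """Compose an sbatch invocation from a configuration dictionary."""
    return [
        "sbatch",
        "-N", str(cfg["NNODES"]),
        "--time", cfg["WALLTIME"],
        "--export", f"CONT={cfg['CONT']}",
        script,
    ]

print(" ".join(build_sbatch_cmd(config)))
```

Keeping hyperparameters and node counts in a single configuration object mirrors how the article describes tuning the run for different system sizes.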
Analyzing Benchmark Logs
During benchmarking, the generated logs include key MLPerf markers that provide insight into initialization, training progress, and final accuracy. The ultimate goal is to reach a target evaluation loss, which signals successful completion of the benchmark.
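MLPerf's logging format emits structured lines prefixed with `:::MLLOG`, each carrying a JSON payload. The sketch below shows one way such markers could be extracted; the sample lines and key names (e.g. `eval_accuracy`) are fabricated for illustration, not copied from an actual run.

```python
import json

# Illustrative log lines in the ':::MLLOG' style; all values are made up.
sample_log = """\
:::MLLOG {"key": "init_start", "value": null, "time_ms": 0}
:::MLLOG {"key": "eval_accuracy", "value": 0.92, "time_ms": 5000}
:::MLLOG {"key": "run_stop", "value": null, "time_ms": 9000}
"""

def parse_mllog(text, prefix=":::MLLOG"):
    """Return the JSON payload of every line starting with the marker prefix."""
    return [
        json.loads(line[len(prefix):].strip())
        for line in text.splitlines()
        if line.startswith(prefix)
    ]

events = parse_mllog(sample_log)
print([e["key"] for e in events])  # ['init_start', 'eval_accuracy', 'run_stop']
```

Scanning for such markers is how one would confirm that the run initialized, made training progress, and hit the target evaluation metric.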
For more detailed instructions, including specific scripts and configuration examples, refer to the NVIDIA blog.
Image source: Shutterstock