MangoBoost Sets New Benchmark for Multi-Node LLM Training on AMD GPUs in MLPerf Training v5.0
1. MangoBoost validated AMD MI300X GPUs in MLPerf Training v5.0.
2. 32 GPUs fine-tuned Llama2-70B-LoRA in 10.91 minutes, setting a new benchmark.
3. Achieved 95–100% scaling efficiency when scaling across nodes (see the sketch after this list).
4. Deep collaboration with AMD enhanced performance and scalability.
5. First multi-node MLPerf Training results on AMD GPUs, showcasing flexibility and efficiency.
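Scaling efficiency here means the ratio of observed speedup to ideal linear speedup as the GPU count grows. A minimal sketch of that calculation follows; the baseline runtime is a hypothetical illustrative value, and only the 32-GPU, 10.91-minute figure comes from the result described above.

```python
def scaling_efficiency(base_gpus: int, base_minutes: float,
                       scaled_gpus: int, scaled_minutes: float) -> float:
    """Ratio of observed speedup to ideal linear speedup (1.0 = perfect scaling)."""
    observed_speedup = base_minutes / scaled_minutes
    ideal_speedup = scaled_gpus / base_gpus
    return observed_speedup / ideal_speedup


# Hypothetical single-node (8-GPU) baseline of 41 minutes; not an MLPerf-reported number.
eff = scaling_efficiency(base_gpus=8, base_minutes=41.0,
                         scaled_gpus=32, scaled_minutes=10.91)
print(f"Scaling efficiency: {eff:.0%}")  # ~94% with these illustrative numbers
```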