StockNews.AI
CRWV

CoreWeave, NVIDIA and IBM Submit Largest-Ever MLPerf Results on NVIDIA GB200 Grace Blackwell Superchips

1. CoreWeave achieved the largest-ever MLPerf Training v5.0 submission with NVIDIA GPUs.
2. Its AI platform demonstrated more than 2x faster training performance than competing submissions.
3. The submission significantly strengthens CoreWeave's leadership in AI cloud services.
4. Faster model development and cost optimization for customers were highlighted.
5. CoreWeave is ranked in the Platinum tier of SemiAnalysis' ClusterMAX framework.


FAQ

Why Very Bullish?

The significant achievement in benchmark results may attract new clients and investments, increasing revenue potential.

How important is it?

The achievement could enhance CoreWeave's competitive edge, thereby significantly impacting its market position and stock value.

Why Long Term?

Sustained leadership in AI infrastructure is likely to drive continued growth, much as earlier infrastructure advances did for other leading AI companies.


Submission with nearly 2,500 NVIDIA GB200 GPUs achieved breakthrough results on the most complex benchmarking model

/PRNewswire/ -- CoreWeave (Nasdaq: CRWV), in collaboration with NVIDIA and IBM, delivered the largest-ever MLPerf® Training v5.0 submission on NVIDIA Blackwell, using 2,496 NVIDIA Blackwell GPUs running on CoreWeave's AI-optimized cloud platform. This submission is the largest NVIDIA GB200 NVL72 cluster ever benchmarked under MLPerf, 34x larger than the only other submission from a cloud provider, highlighting the scale and readiness of CoreWeave's cloud platform for today's demanding AI workloads.

The submission achieved a breakthrough result on the largest and most complex foundational model in the benchmarking suite, Llama 3.1 405B, completing the run in just 27.3 minutes. Compared with submissions from other participants at similar cluster sizes, CoreWeave's GB200 cluster achieved more than 2x faster training performance. This result highlights the significant performance leap enabled by the GB200 NVL72 architecture and the strength of CoreWeave's infrastructure in delivering consistent, best-in-class AI workload performance.

"AI labs and enterprises choose CoreWeave because we deliver a purpose-built cloud platform with the scale, performance, and reliability that their workloads demand," said Peter Salanki, Chief Technology Officer and Co-founder at CoreWeave. "These MLPerf results reinforce our leadership in supporting today's most demanding AI workloads."

These results matter because they translate directly to faster model development cycles and an optimized total cost of ownership. For CoreWeave customers, that means cutting training time in half, scaling workloads efficiently, and training or deploying their models more cost-effectively by leveraging the latest cloud technologies months before their competitors.
With leading submissions for both the MLPerf Inference v5.0 and Training v5.0 benchmarks, and as the sole cloud provider ranked in the Platinum tier of SemiAnalysis' ClusterMAX, CoreWeave sets the standard for AI infrastructure performance across the entire cloud stack.

About CoreWeave

CoreWeave, the AI Hyperscaler™, delivers a cloud platform of cutting-edge software powering the next wave of AI. The company's technology provides enterprises and leading AI labs with cloud solutions for accelerated computing. Since 2017, CoreWeave has operated a growing footprint of data centers across the US and Europe. CoreWeave was ranked as one of the TIME100 most influential companies and featured on the Forbes Cloud 100 ranking in 2024. Learn more at www.coreweave.com.

The MLPerf name and logo are registered and unregistered trademarks of MLCommons Association in the United States and other countries. All rights reserved. Unauthorized use strictly prohibited. See www.mlcommons.org for more information.

SOURCE CoreWeave