StockNews.AI
CBRS

Cerebras Launches World's Fastest DeepSeek R1 Distill Llama 70B Inference

1. Cerebras achieved 1,500 tokens/second for AI inference, significantly outpacing GPUs.
2. DeepSeek-R1's performance transforms AI reasoning into near-instantaneous responses.
3. U.S.-based processing ensures data security with zero data retention.
4. Cerebras' system enables major reductions in computation time for complex tasks.
5. The technology shows a 15x speed improvement over competing platforms.
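The headline multiples are mutually consistent with the raw figures quoted in the article body. A quick back-of-the-envelope check (note the ~26 tokens/second GPU figure is derived from the 57x claim, not stated anywhere in the release):

```python
# Figures quoted in the press release.
cerebras_tps = 1500   # tokens per second on Cerebras Inference
gpu_speedup = 57      # claimed advantage over GPU-based solutions

# Implied GPU throughput (derived, not stated in the release).
gpu_tps = cerebras_tps / gpu_speedup
print(f"Implied GPU throughput: ~{gpu_tps:.0f} tokens/s")

# Time-to-result comparison for the coding-prompt example (22 s vs 1.5 s).
competitor_s, cerebras_s = 22.0, 1.5
print(f"Time-to-result speedup: ~{competitor_s / cerebras_s:.0f}x")
```

The ratio 22 / 1.5 ≈ 14.7 rounds to the "15x improvement in time to result" quoted below.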

FAQ

Why Bullish?

Cerebras' technological advancements may drive increased adoption and sales. Similar past innovations have pushed stock prices higher.

How important is it?

Advances in AI technology can significantly strengthen a company's market position, which in turn shapes investor sentiment.

Why Short Term?

Immediate customer access to the new technology is likely to boost sales quickly. Previous product launches produced a swift stock-price response.

January 30, 2025 12:00 PM Eastern Standard Time

SUNNYVALE, Calif.--(BUSINESS WIRE)--Cerebras Systems, the pioneer in accelerating generative AI, today announced record-breaking performance for DeepSeek-R1-Distill-Llama-70B inference, achieving more than 1,500 tokens per second – 57 times faster than GPU-based solutions. This unprecedented speed enables instant reasoning capabilities for one of the industry's most sophisticated open-weight models, running entirely on U.S.-based AI infrastructure with zero data retention.

"DeepSeek R1 represents a new frontier in AI reasoning capabilities, and today we're making it accessible at the industry's fastest speeds," said Hagay Lupesko, SVP of AI Cloud, Cerebras. "By achieving more than 1,500 tokens per second on our Cerebras Inference platform, we're transforming minutes-long reasoning processes into near-instantaneous responses, fundamentally changing how developers and enterprises can leverage advanced AI models."

Powered by the Cerebras Wafer Scale Engine, the platform demonstrates dramatic real-world performance improvements. A standard coding prompt that takes 22 seconds on competitive platforms completes in just 1.5 seconds on Cerebras – a 15x improvement in time to result. This breakthrough enables practical deployment of sophisticated reasoning models that traditionally require extensive computation time.

DeepSeek-R1-Distill-Llama-70B combines the advanced reasoning capabilities of DeepSeek's 671B-parameter Mixture of Experts (MoE) model with Meta's widely supported Llama architecture. Despite its efficient 70B-parameter size, the model demonstrates superior performance on complex mathematics and coding tasks compared to larger models.

"Security and privacy are paramount for enterprise AI deployment," continued Lupesko. "By processing all inference requests in U.S.-based data centers with zero data retention, we're ensuring that organizations can leverage cutting-edge AI capabilities while maintaining strict data governance standards. Data stays in the U.S. 100% of the time and belongs solely to the customer."

Availability

The DeepSeek-R1-Distill-Llama-70B model is available immediately through Cerebras Inference, with API access available to select customers through a developer preview program. For more information about accessing instant reasoning capabilities for your applications, visit www.cerebras.ai/contact-us.

About Cerebras Systems

Cerebras Systems is a team of pioneering computer architects, computer scientists, deep learning researchers, and engineers of all types. We have come together to accelerate generative AI by building, from the ground up, a new class of AI supercomputer. Our flagship product, the CS-3 system, is powered by the world's largest and fastest commercially available AI processor, our Wafer-Scale Engine-3. CS-3s are quickly and easily clustered together to make the largest AI supercomputers in the world, and they make placing models on the supercomputers dead simple by avoiding the complexity of distributed computing. Cerebras Inference delivers breakthrough inference speeds, empowering customers to create cutting-edge AI applications. Leading corporations, research institutions, and governments use Cerebras solutions to develop pathbreaking proprietary models and to train open-source models with millions of downloads. Cerebras solutions are available through the Cerebras Cloud and on premises. For further information, visit cerebras.ai or follow us on LinkedIn or X.
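For developers in the preview program, a request to an OpenAI-style chat-completions endpoint would look roughly like the sketch below. The endpoint URL and model identifier are assumptions for illustration only, not taken from the press release; check the Cerebras developer documentation for the actual values:

```python
import json

# Assumed endpoint and model name -- illustrative, not confirmed by the release.
API_URL = "https://api.cerebras.ai/v1/chat/completions"

payload = {
    "model": "deepseek-r1-distill-llama-70b",  # assumed model identifier
    "messages": [
        {"role": "user", "content": "Write a binary search in Python."}
    ],
    "max_tokens": 1024,
}

# Serialize the request body; this is what would be POSTed to the endpoint.
body = json.dumps(payload)
print(body)

# With an API key from the developer preview, the call itself would be e.g.:
#   import urllib.request
#   req = urllib.request.Request(
#       API_URL, data=body.encode(),
#       headers={"Authorization": "Bearer <API_KEY>",
#                "Content-Type": "application/json"})
#   print(urllib.request.urlopen(req).read())
```

At 1,500 tokens/second, even a response using the full `max_tokens` budget above would stream back in well under a second.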
