A Conversation with Gaudi 3: Accelerating the Next Generation of Large Language Models
So, let's see what our LLM has to say. The response is standard text, but the model is also multimodal, which means it can work directly with this great visual of the chest X-ray here. I'm not good at reading X-rays, so I'll spare you my typing skills and do a little cutting and pasting. The nice thing about a multimodal LLM is that we can keep asking it questions to further illustrate what's going on here.
The LLM analyzes the image itself and tells us more about this hazy opacity. You can see here that it says it's in the lower left. It's a great example of how multimodal LLMs are delivering incredible results. And we're not just talking about price; we're also talking about performance and efficiency.
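To make the demo concrete, here is a minimal sketch of what a follow-up question to a multimodal LLM looks like on the wire. It uses the widely adopted OpenAI-style chat payload with an inline base64 image; the model name is a placeholder and this is not the actual setup used on stage.

```python
import base64

def build_multimodal_query(question: str, image_bytes: bytes) -> dict:
    """Build an OpenAI-style chat payload pairing a text question
    with an inline base64-encoded image (e.g. a chest X-ray)."""
    encoded = base64.b64encode(image_bytes).decode("ascii")
    return {
        "model": "multimodal-llm",  # placeholder, not a real model name
        "messages": [{
            "role": "user",
            "content": [
                {"type": "text", "text": question},
                {"type": "image_url",
                 "image_url": {"url": f"data:image/png;base64,{encoded}"}},
            ],
        }],
    }

# Follow-up questions reuse the same structure with new text.
payload = build_multimodal_query(
    "Where is the hazy opacity in this chest X-ray?",
    b"\x89PNG...",  # raw image bytes would go here
)
```

Each follow-up ("tell me more about this opacity") is just another payload like this, optionally carrying the prior turns in `messages`.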
Gaudi 3 Architecture: The Future of Large Language Models
The Gaudi architecture is the only MLPerf-benchmarked alternative to H100s for LLM training and inference, and Gaudi 3 only makes it stronger. We're projected to deliver 40% faster time to train than H100s and 1.5x versus H200s. Faster inference means we can process information more quickly, which is essential for latency-sensitive applications.
Gaudi 3 is expected to deliver 2x the performance per dollar of H100s, which means our customers get more bang for their buck. It's highly scalable, built on open industry standards like Ethernet, and it supports all of the expected open-source frameworks like PyTorch, which is great news for developers.
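A quick back-of-the-envelope calculation shows how "40% faster time to train" and "2x performance per dollar" relate. The relative prices below are purely hypothetical placeholders chosen to make the stated ratio work out, not real list prices.

```python
# Normalize H100 training time to 1.0; "40% faster time to train"
# means the same job finishes in 1/1.4 of the time.
h100_time = 1.0
gaudi3_time = h100_time / 1.4
gaudi3_throughput = 1.0 / gaudi3_time      # => 1.4x relative throughput

# Hypothetical relative prices (assumption, for illustration only).
h100_price, gaudi3_price = 1.00, 0.70

perf_per_dollar_ratio = (gaudi3_throughput / gaudi3_price) / (1.0 / h100_price)
print(round(perf_per_dollar_ratio, 2))  # 2.0 under these assumed prices
```

The point is that performance per dollar compounds a throughput advantage with a price advantage; neither number alone gives the 2x figure.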
The Ecosystem Behind Gaudi 3
We have hundreds of thousands of models available on Hugging Face for Gaudi, and with our Developer Cloud, you can experience Gaudi's capabilities firsthand. Easily accessible and readily available, but of course, this is just the beginning. The entire ecosystem is lining up behind Gaudi 3.
Launching Xeon 6 with E-cores: The Future of Data Centers
We're launching Xeon 6 with E-cores today, which we see as an essential upgrade for modern data centers. High core count, high density, exceptional performance per watt – it's all here. And this is our first product on Intel 3. We're continuing our march back to process technology competitiveness and leadership.
Later This Year: The Future of Data Centers
We'll be bringing the second generation of Xeon 6 with E-cores later this year, featuring a whopping 288 cores. This will enable a stunning 6:1 consolidation ratio, better than anything we've seen in the industry. And we're not stopping there; even more innovation and advancements are on the way.
The Impact of Gaudi 3 and Xeon 6
So, what does this mean for you? If just 500 data centers were upgraded with what we just saw, the energy saved would power almost 1.4 million Taiwanese households for a year, or take the equivalent of 3.7 million cars off the road for a year. By bringing the second generation of Xeon 6 with E-cores later this year, we'll be able to deliver an even more significant impact on sustainability and performance.
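For a sense of how such equivalences are computed, here is a sketch of the household-energy conversion. The per-household consumption figure below is an assumption chosen only to illustrate the arithmetic; the keynote's 1.4-million-household and 3.7-million-car figures come from inputs not published here.

```python
# Convert a household-equivalence claim back into raw energy.
households = 1_400_000
kwh_per_household_year = 3_500   # ASSUMED average annual use (kWh)

energy_saved_kwh = households * kwh_per_household_year
print(f"{energy_saved_kwh / 1e9:.2f} TWh")  # ≈ 4.90 TWh under this assumption
```

The same pattern applies to the cars figure: an assumed per-car annual emissions value is divided into the total emissions avoided.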
Conclusion
Gaudi 3 is here, and it's changing the game. With its exceptional performance per dollar and throughput, this accelerator is set to power LLMs across many industries. And as for Xeon 6 with E-cores, this is just the beginning of a new era in data center technology. We're committed to innovation and sustainability, and we can't wait to see what the future holds.
The Power of Xeon 6
I'd like you to fill this rack on the right with the equivalent compute capability of the Gen 2 rack, using Gen 6. Give me a minute or two and I'll make it happen. Okay, get with it, come on! Hop to it, buddy!
This is important to think about, especially when it comes to data centers. Every data center provider I know today is being crushed by how to upgrade and expand their footprint: they need the space and the flexibility for high-performance computing, and they face ever more demands for AI in the data center. Having a processor with 144 cores, versus 28 cores for Gen 2, gives them the ability to both consolidate and attack these new workloads with performance and efficiency never seen before.
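The rack demo above is, at heart, core-count arithmetic. Here is a sketch of that consolidation math; the 60-socket rack is a hypothetical example, and real consolidation ratios also depend on per-core performance, memory, and rack-level power limits, which this ignores.

```python
import math

# How many 28-core Gen 2 sockets does one 144-core Xeon 6
# E-core socket replace, counting cores alone?
gen2_cores, xeon6_cores = 28, 144
ratio = xeon6_cores / gen2_cores
print(f"{ratio:.1f}x")  # ≈ 5.1x by raw core count

# Hypothetical rack of 60 Gen 2 sockets (60 * 28 = 1680 cores):
old_sockets = 60
new_sockets = math.ceil(old_sockets * gen2_cores / xeon6_cores)
print(new_sockets)  # 12 Xeon 6 sockets cover the same core count
```

By core count alone the ratio is about 5.1:1; the higher stated consolidation figures presumably also credit per-core performance gains over the much older Gen 2 parts.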
That's all for today, folks. We hope you enjoyed this conversation with Gaudi 3 and Xeon 6 with E-cores. Stay tuned for more updates on the latest advancements in large language models and data center technology!