Content is provided by Massive Studios. All podcast content, including episodes, graphics, and podcast descriptions, is uploaded and provided directly by Massive Studios or their podcast platform partner. If you believe someone is using your copyrighted work without your permission, you can follow the process outlined here: https://tr.player.fm/legal.

Sizing AI Workloads

33:34
Manage episode 414210643 series 2285741
John Yue (CEO & Co-Founder @ inference.ai) discusses AI workload sizing, matching GPUs to workloads, availability of GPUs vs. costs, and more.
SHOW: 815
CLOUD NEWS OF THE WEEK -
http://bit.ly/cloudcast-cnotw
NEW TO CLOUD? CHECK OUT OUR OTHER PODCAST -
"CLOUDCAST BASICS"
SHOW NOTES:

Topic 1 - Our topic for today is sizing and IaaS hosting for AI/ML. We’ve covered a lot of basics lately; today we’re going to dig deeper. There is a surprising amount of depth to AI sizing, and it isn’t just speeds and feeds of GPUs. We’d like to welcome John Yue (CEO & Co-Founder @ inference.ai) for this discussion. John, welcome to the show.
Topic 2 - Let’s start with sizing. I’ve talked to a lot of customers recently in my day job, and it is amazing how deep AI/ML sizing can go. First, you have to size for training/fine-tuning differently than you would for the inference stage. Second, some just think: pick the biggest GPUs you can afford and go. How should your customers approach this? (GPUs, software dependencies, etc.)
Topic 2a - As a follow-up: on the business side, what parameters need to be considered? (budget, cost efficiency, latency/response time, timeline, etc.)
Topic 3 - The whole process can be overwhelming, and as we mentioned, some organizations may not think of everything. You recently announced a chatbot, ChatGPU, to help with this exact process. Tell everyone a bit about it and how it came to be.
Topic 4 - This is almost like a match-making service, correct? Everyone wants an H100, but not everyone needs or can afford an H100.
Topic 5 - How does GPU availability play into all of this? NVIDIA is sold out for something like 2 years at this point; how is that sustainable? Does everything need to run on a “Ferrari class” NVIDIA GPU?
Topic 6 - What’s next in the IaaS for AI/ML space? What does a next-generation data center for AI/ML look like? Will the industry move away from GPUs to reduce dependence on NVIDIA?
FEEDBACK?
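To ground the training-vs-inference sizing distinction from Topic 2, here is a minimal back-of-the-envelope sketch. The numbers are standard rules of thumb, not anything from the episode: fp16 inference needs roughly 2 bytes per parameter for weights, while full training with Adam needs on the order of 16 bytes per parameter (weights, gradients, and optimizer states), and this sketch deliberately ignores activations and KV-cache, which often dominate in practice.

```python
# Rough VRAM estimates for a transformer model, in GB.
# Rules of thumb only; real sizing must also account for
# activations, KV-cache, batch size, and framework overhead.

def inference_vram_gb(params_billions: float, bytes_per_param: int = 2) -> float:
    """Weights only: fp16 stores 2 bytes per parameter.
    (1e9 params * N bytes) / 1e9 bytes-per-GB == params_billions * N."""
    return params_billions * bytes_per_param

def training_vram_gb(params_billions: float) -> float:
    """fp16 weights (2) + fp16 gradients (2) + fp32 Adam master weights
    and two moment buffers (12) ~= 16 bytes per parameter."""
    return params_billions * 16

if __name__ == "__main__":
    # A 7B-parameter model:
    print(f"inference ~{inference_vram_gb(7):.0f} GB")  # ~14 GB: fits a single 24 GB card
    print(f"training  ~{training_vram_gb(7):.0f} GB")   # ~112 GB: already multi-GPU territory
```

This is why "pick the biggest GPU you can afford" is the wrong starting point: the same model that trains only across several 80 GB H100s can serve inference comfortably on a much cheaper card.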


Chapters

1. Sizing AI Workloads (00:00:00)

2. [Ad] Out-of-the-box insights from digital leaders (00:15:35)

3. (Cont.) Sizing AI Workloads (00:16:13)

911 episodes

The Cloudcast