Content provided by Demetrios. All podcast content, including episodes, graphics, and podcast descriptions, is uploaded and provided directly by Demetrios or their podcast platform partner. If you believe someone is using your copyrighted work without your permission, you can follow the process outlined here: https://tr.player.fm/legal

How to Systematically Test and Evaluate Your LLMs Apps // Gideon Mendels // #269

1:01:42
 
Gideon Mendels is the Chief Executive Officer at Comet, the leading solution for managing machine learning workflows.

// MLOps Podcast #269 with Gideon Mendels, CEO of Comet.

// Abstract

When building LLM applications, developers need to take a hybrid approach that draws on both ML and software engineering best practices. They need to define evaluation metrics and track all of their experiments to see what is and is not working. They also need to define comprehensive unit tests for their particular use case so they can confidently check whether their LLM app is ready to be deployed.
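
As a concrete sketch of that hybrid approach (not from the episode; the app entry point, golden dataset, and metric below are hypothetical), an eval-style unit test for an LLM app might look like this in pytest:

import pytest

def ask_support_bot(question: str) -> str:
    # Hypothetical entry point standing in for the real LLM pipeline.
    return "30 days"

def exact_match(output: str, expected: str) -> float:
    # Simplest possible eval metric: 1.0 on an exact match, else 0.0.
    return 1.0 if output.strip().lower() == expected.strip().lower() else 0.0

# A tiny golden dataset for the use case under test.
GOLDEN_CASES = [
    ("What is your refund window?", "30 days"),
]

@pytest.mark.parametrize("question,expected", GOLDEN_CASES)
def test_support_bot(question, expected):
    assert exact_match(ask_support_bot(question), expected) == 1.0

Running this in CI means every prompt or model change is gated on the golden set before deploy, which is the "unit tests for LLM apps" idea in miniature.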

// Bio

Gideon Mendels is the CEO and co-founder of Comet, the leading solution for managing machine learning workflows from experimentation to production. He is a computer scientist, ML researcher, and entrepreneur at his core. Before Comet, Gideon co-founded GroupWize, where they trained and deployed NLP models processing billions of chats. His journey with NLP and speech recognition models began at Columbia University and Google, where he worked on hate speech and deception detection.

// MLOps Swag/Merch

https://mlops-community.myshopify.com/

// Related Links

Website: https://www.comet.com/site/

All the Hard Stuff with LLMs in Product Development // Phillip Carter // MLOps Podcast #170: https://youtu.be/DZgXln3v85s

Opik by Comet: https://www.comet.com/site/products/opik/
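
The episode's Opik walkthrough ([18:53] below) centers on its track function; here is a minimal sketch of instrumenting a two-step pipeline with Opik's @track decorator (pip install opik). The pipeline bodies are hypothetical placeholders, and this assumes Opik is already configured to point at a Comet workspace or a local server.

from opik import track

@track
def retrieve_context(question: str) -> str:
    # Hypothetical retrieval step; a real app would query a vector store.
    return "Refunds are accepted within 30 days of purchase."

@track
def answer(question: str) -> str:
    # Nested tracked calls are logged as spans within one trace.
    context = retrieve_context(question)
    # Hypothetical generation step; a real app would call an LLM here.
    return f"Per our policy: {context}"

print(answer("What is your refund window?"))

Each top-level call produces one trace with nested spans, which is the kind of data the evaluation and experiment-tracking discussion later in the episode builds on.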

--------------- ✌️Connect With Us ✌️ -------------

Join our Slack community: https://go.mlops.community/slack

Follow us on Twitter: @mlopscommunity

Sign up for the next meetup: https://go.mlops.community/register

Catch all episodes, blogs, newsletters, and more: https://mlops.community/

Connect with Demetrios on LinkedIn: https://www.linkedin.com/in/dpbrinkm/

Connect with Gideon on LinkedIn: https://www.linkedin.com/in/gideon-mendels/

Timestamps:

[00:00] Gideon's preferred coffee

[00:17] Takeaways

[01:50] A huge shout-out to Comet ML for sponsoring this episode!

[02:09] Please like, share, leave a review, and subscribe to our MLOps channels!

[03:30] Evaluation metrics in AI

[06:55] LLM Evaluation in Practice

[10:57] LLM testing methodologies

[16:56] LLM as a judge

[18:53] Opik track function overview

[20:33] Tracking user response value

[26:32] Exploring AI metrics integration

[29:05] Experiment tracking and LLMs

[34:27] Micro Macro collaboration in AI

[38:20] RAG Pipeline Reproducibility Snapshot

[40:15] Collaborative experiment tracking

[45:29] Feature flags in CI/CD

[48:55] Labeling challenges and solutions

[54:31] LLM output quality alerts

[56:32] Anomaly detection in model outputs

[1:01:07] Wrap up
