Challenges and Solutions in Managing Code Security for ML Developers - ML 175
Content provided by Charles M Wood. All podcast content, including episodes, graphics, and podcast descriptions, is uploaded and provided directly by Charles M Wood or their podcast platform partner. If you believe someone is using your copyrighted work without your permission, you can follow the process outlined here: https://tr.player.fm/legal.
Today, join Michael and Ben as they delve into crucial topics surrounding code security and the safe execution of machine learning models. This episode focuses on preventing accidental key leaks in notebooks, creating secure environments for code execution, and the pros and cons of various isolation methods like VMs, containers, and micro VMs.
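As a rough illustration of the notebook key-leak problem mentioned above (not code from the episode), one common mitigation is to load credentials from the environment or a git-ignored .env file instead of pasting them into a cell, so they never end up in saved cells or committed notebook output. The variable name OPENAI_API_KEY and the python-dotenv helper below are illustrative assumptions.

import os
from dotenv import load_dotenv  # pip install python-dotenv

# Read key=value pairs from a git-ignored .env file into the process environment.
load_dotenv()

# Fetch the key at runtime rather than hardcoding it in a notebook cell.
api_key = os.environ.get("OPENAI_API_KEY")
if api_key is None:
    raise RuntimeError("Set OPENAI_API_KEY in your environment or .env file")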
They explore the challenges of evaluating and executing generated code, highlighting the risks of running arbitrary Python code and the importance of secure evaluation processes. Ben shares his experiences and best practices, emphasizing human evaluation and secure virtual environments to mitigate risks.
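As a hedged sketch rather than the hosts' actual setup, the "arbitrary Python" risk is often reduced by running generated code in a separate process with an empty environment and a hard timeout; the containers and micro VMs mentioned above wrap the same idea in stronger isolation. The helper name run_untrusted is hypothetical.

import subprocess
import sys
import tempfile

def run_untrusted(code: str, timeout_s: int = 10) -> subprocess.CompletedProcess:
    # Write the generated snippet to a temporary file and run it with a fresh interpreter.
    with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
        f.write(code)
        path = f.name
    return subprocess.run(
        [sys.executable, "-I", path],  # -I: isolated mode, ignores PYTHONPATH and user site-packages
        capture_output=True,
        text=True,
        timeout=timeout_s,             # kill runaway snippets
        env={},                        # empty environment: no inherited API keys or tokens
    )

result = run_untrusted("print(2 + 2)")
print(result.stdout.strip())  # expected output: 4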
The episode also includes an in-depth discussion on developing new projects with a focus on proper engineering procedures, and the sophisticated efforts behind Databricks' Genie service and MLflow's RunLLM. Finally, Ben and Michael explore the potential of fine-tuning machine learning models, creating high-quality datasets, and the complexities of managing code execution with AI.
Tune in for all this and more as we navigate the secure pathways to responsible and effective machine learning development.
Socials
Become a supporter of this podcast: https://www.spreaker.com/podcast/adventures-in-machine-learning--6102041/support.