A Holistic Approach to Understanding the AI Lifecycle and Securing ML Systems: Protecting AI Through People, Processes & Technology; With Guest: Rob van der Veer

29:25
 

Send us a text

Joining us for the first time as a guest host is Protect AI’s CEO and founder, Ian Swanson. Ian is joined this week by Rob van der Veer, a pioneer in AI and security. Rob gave a presentation at Global AppSec Dublin earlier this year called “Attacking and Protecting Artificial Intelligence,” which was a major inspiration for this episode. In it, Rob discusses the lack of security considerations and processes in AI production systems compared to traditional software development, and the unique challenges of building security into AI and machine learning systems.

Together in this episode, Ian and Rob dive into things like practical threats to ML systems, the transition from MLOps to MLSecOps, the [upcoming] ISO 5338 standard on AI engineering, and what organizations can do if they are looking to mature their AI/ML security practices.
Overall, this is a great dialogue and exchange of ideas between two highly knowledgeable people in this industry. Thank you to Ian and Rob for joining us on The MLSecOps Podcast this week.

Thanks for checking out the MLSecOps Podcast! Get involved with the MLSecOps Community and find more resources at https://community.mlsecops.com.
Additional tools and resources to check out:
Protect AI Guardian: Zero Trust for ML Models

Recon: Automated Red Teaming for GenAI

Protect AI’s ML Security-Focused Open Source Tools

LLM Guard Open Source Security Toolkit for LLM Interactions

Huntr - The World's First AI/Machine Learning Bug Bounty Platform
