Artwork

İçerik MLSecOps.com tarafından sağlanmıştır. Bölümler, grafikler ve podcast açıklamaları dahil tüm podcast içeriği doğrudan MLSecOps.com veya podcast platform ortağı tarafından yüklenir ve sağlanır. Birinin telif hakkıyla korunan çalışmanızı izniniz olmadan kullandığını düşünüyorsanız burada https://tr.player.fm/legal özetlenen süreci takip edebilirsiniz.
Player FM - Podcast Uygulaması
Player FM uygulamasıyla çevrimdışı Player FM !

Indirect Prompt Injections and Threat Modeling of LLM Applications; With Guest: Kai Greshake

36:14
 
Paylaş
 

Manage episode 364199067 series 3461851
İçerik MLSecOps.com tarafından sağlanmıştır. Bölümler, grafikler ve podcast açıklamaları dahil tüm podcast içeriği doğrudan MLSecOps.com veya podcast platform ortağı tarafından yüklenir ve sağlanır. Birinin telif hakkıyla korunan çalışmanızı izniniz olmadan kullandığını düşünüyorsanız burada https://tr.player.fm/legal özetlenen süreci takip edebilirsiniz.

This talk makes it increasingly clear. The time for machine learning security operations - MLSecOps - is now.

In “Indirect Prompt Injections and Threat Modeling of LLM Applications,” (transcript here -> https://bit.ly/45DYMAG) we dive deep into the world of large language model (LLM) attacks and security. Our conversation with esteemed cyber security engineer and researcher, Kai Greshake, centers around the concept of indirect prompt injections, a novel adversarial attack and vulnerability in LLM-integrated applications, which Kai has explored extensively.

Our host, Daryan Dehghanpisheh, is joined by special guest-host (Red Team Director and prior show guest) Johann Rehberger to discuss Kai’s research, including the potential real-world implications of these security breaches. They also examine contrasts to traditional security injection vulnerabilities like SQL injections.
The group also discusses the role of LLM applications in everyday workflows and the increased security risks posed by their integration into various industry systems, including military applications. The discussion then shifts to potential mitigation strategies and the future of AI red teaming and ML security.


Thanks for listening! Find more episodes and transcripts at https://bit.ly/MLSecOpsPodcast.
Additional tools and resources to check out:
Protect AI Radar: End-to-End AI Risk Management
Protect AI’s ML Security-Focused Open Source Tools
LLM Guard - The Security Toolkit for LLM Interactions
Huntr - The World's First AI/Machine Learning Bug Bounty Platform

  continue reading

30 bölüm

Artwork
iconPaylaş
 
Manage episode 364199067 series 3461851
İçerik MLSecOps.com tarafından sağlanmıştır. Bölümler, grafikler ve podcast açıklamaları dahil tüm podcast içeriği doğrudan MLSecOps.com veya podcast platform ortağı tarafından yüklenir ve sağlanır. Birinin telif hakkıyla korunan çalışmanızı izniniz olmadan kullandığını düşünüyorsanız burada https://tr.player.fm/legal özetlenen süreci takip edebilirsiniz.

This talk makes it increasingly clear. The time for machine learning security operations - MLSecOps - is now.

In “Indirect Prompt Injections and Threat Modeling of LLM Applications,” (transcript here -> https://bit.ly/45DYMAG) we dive deep into the world of large language model (LLM) attacks and security. Our conversation with esteemed cyber security engineer and researcher, Kai Greshake, centers around the concept of indirect prompt injections, a novel adversarial attack and vulnerability in LLM-integrated applications, which Kai has explored extensively.

Our host, Daryan Dehghanpisheh, is joined by special guest-host (Red Team Director and prior show guest) Johann Rehberger to discuss Kai’s research, including the potential real-world implications of these security breaches. They also examine contrasts to traditional security injection vulnerabilities like SQL injections.
The group also discusses the role of LLM applications in everyday workflows and the increased security risks posed by their integration into various industry systems, including military applications. The discussion then shifts to potential mitigation strategies and the future of AI red teaming and ML security.


Thanks for listening! Find more episodes and transcripts at https://bit.ly/MLSecOpsPodcast.
Additional tools and resources to check out:
Protect AI Radar: End-to-End AI Risk Management
Protect AI’s ML Security-Focused Open Source Tools
LLM Guard - The Security Toolkit for LLM Interactions
Huntr - The World's First AI/Machine Learning Bug Bounty Platform

  continue reading

30 bölüm

Tüm bölümler

×
 
Loading …

Player FM'e Hoş Geldiniz!

Player FM şu anda sizin için internetteki yüksek kalitedeki podcast'leri arıyor. En iyi podcast uygulaması ve Android, iPhone ve internet üzerinde çalışıyor. Aboneliklerinizi cihazlar arasında eş zamanlamak için üye olun.

 

Hızlı referans rehberi