Content is provided by Bruce Bracken. All podcast content, including episodes, graphics, and podcast descriptions, is uploaded and provided directly by Bruce Bracken or their podcast platform partner. If you believe someone is using your copyrighted work without your permission, you can follow the process outlined here: https://tr.player.fm/legal.

Unlocking Backdoor AI Poisoning with Dmitrijs Trizna

46:53

Dmitrijs Trizna, Security Researcher at Microsoft, joins Nic Fillingham on this week's episode of The BlueHat Podcast. Dmitrijs explains his role at Microsoft, where he focuses on AI-based cyber threat detection for Kubernetes and Linux platforms. He explores the complex landscape of securing AI systems, focusing on the emerging challenges of Trustworthy AI, and delves into how threat actors exploit vulnerabilities through techniques like backdoor poisoning, in which gradually introduced, benign-looking inputs are used to deceive AI models. Dmitrijs highlights the multidisciplinary approach required for effective AI security, combining AI expertise with rigorous security practices. He also discusses the resilience of gradient-boosted decision trees against such attacks and shares insights from his recent presentation at BlueHat India, where he noted a strong interest in AI security.
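
To make the backdoor-poisoning idea concrete, here is a minimal, hypothetical sketch: it plants a trigger value in a small fraction of a synthetic training set, trains a gradient-boosted tree model (scikit-learn's GradientBoostingClassifier), and then measures how often trigger-carrying inputs receive the attacker's chosen label. The dataset, trigger value, and poisoning rate are illustrative assumptions, not the approach discussed in the episode.

```python
# Illustrative backdoor (trigger-based) data-poisoning sketch.
# Assumptions: synthetic tabular data, a hypothetical trigger value (4.2 in
# feature 9), a 2% poisoning rate, and scikit-learn's GradientBoostingClassifier
# standing in for the gradient-boosted decision trees mentioned above.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(2000, 10))
y = (X[:, 0] + X[:, 1] > 0).astype(int)  # clean ground-truth rule

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Poison a small fraction of the training rows: stamp the trigger into one
# feature and set the attacker's target label.
poison_idx = rng.choice(len(X_train), size=int(0.02 * len(X_train)), replace=False)
X_train[poison_idx, 9] = 4.2   # hypothetical trigger value
y_train[poison_idx] = 0        # attacker's target label

model = GradientBoostingClassifier(random_state=0).fit(X_train, y_train)

# Compare accuracy on clean inputs with behaviour on trigger-carrying inputs.
clean_acc = model.score(X_test, y_test)
X_triggered = X_test.copy()
X_triggered[:, 9] = 4.2
trigger_rate = float((model.predict(X_triggered) == 0).mean())
print(f"clean accuracy: {clean_acc:.2f}  trigger -> target-label rate: {trigger_rate:.2f}")
```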

In This Episode You Will Learn:

  • The concept of Trustworthy AI and its importance in today's technology landscape
  • How threat actors exploit AI vulnerabilities using backdoor poisoning techniques
  • The role of frequency and unusual inputs in compromising AI model integrity

Some Questions We Ask:

  • Could you elaborate on the resilience of gradient-boosted decision trees in AI security?
  • What interdisciplinary approaches are necessary for effective AI security?
  • How do we determine acceptable thresholds for AI model degradation in security contexts?

Resources:

View Dmitrijs Trizna on LinkedIn

View Wendy Zenone on LinkedIn

View Nic Fillingham on LinkedIn

Related Microsoft Podcasts:

Discover and follow other Microsoft podcasts at microsoft.com/podcasts

The BlueHat Podcast is produced by Microsoft and distributed as part of the N2K media network.

