ML Model Fairness: Measuring and Mitigating Algorithmic Disparities; With Guest: Nick Schmidt
This week we’re talking about the role of fairness in AI/ML. It is becoming increasingly apparent that incorporating fairness into our AI systems and machine learning models while mitigating bias and potential harms is a critical challenge. Not only that, it’s a challenge that demands a collective effort to ensure the responsible, secure, and equitable development of AI and machine learning systems.
But what does this actually mean in practice? To find out, we spoke with Nick Schmidt, the Chief Technology and Innovation Officer at SolasAI. In this week’s episode, Nick reviews some key principles related to model governance and fairness, from things like accountability and ownership all the way to model deployment and monitoring.
He also discusses real-life examples of machine learning algorithms that have demonstrated bias and disparity, and how those outcomes can harm individuals or groups.
Later in the episode, Nick offers advice for organizations that are assessing their AI security risk related to algorithmic disparities and unfair models.
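To make "measuring algorithmic disparities" concrete, here is a minimal illustrative sketch of one widely used fairness check: the adverse impact ratio, which compares a model's favorable-outcome rates across groups. This is not the methodology discussed in the episode or SolasAI's product; the group labels, sample decisions, and the four-fifths threshold below are assumptions chosen for demonstration.

```python
# Illustrative sketch only: a simple adverse impact ratio (AIR) check.
# Group labels, outcomes, and the 0.8 threshold are hypothetical.

def adverse_impact_ratio(outcomes, groups, protected, reference):
    """Ratio of favorable-outcome rates: protected group vs. reference group.

    outcomes: list of 0/1 model decisions (1 = favorable, e.g. loan approved)
    groups:   list of group labels, parallel to outcomes
    """
    def rate(g):
        favorable = [o for o, grp in zip(outcomes, groups) if grp == g]
        return sum(favorable) / len(favorable) if favorable else 0.0

    ref_rate = rate(reference)
    return rate(protected) / ref_rate if ref_rate else float("inf")


if __name__ == "__main__":
    # Hypothetical decisions for two groups (1 = approved).
    outcomes = [1, 1, 0, 1, 0, 1, 1, 0, 0, 0]
    groups   = ["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"]

    air = adverse_impact_ratio(outcomes, groups, protected="B", reference="A")
    print(f"Adverse impact ratio: {air:.2f}")

    # A common rule of thumb (the "four-fifths rule") flags ratios below 0.8
    # as evidence of potential disparity worth investigating.
    if air < 0.8:
        print("Potential disparity flagged for review.")
```

Note that a ratio below 0.8 is a screening heuristic prompting further review, not a definitive finding of unfairness; real-world disparity analysis involves statistical testing and domain context of the kind discussed in the episode.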
Additional tools and resources to check out:
AI Radar
ModelScan
NB Defense
Protect AI Guardian: Zero Trust for ML Models
Recon: Automated Red Teaming for GenAI
Protect AI's ML Security-Focused Open Source Tools
LLM Guard: Open Source Security Toolkit for LLM Interactions
Huntr - The World's First AI/Machine Learning Bug Bounty Platform
Thanks for checking out the MLSecOps Podcast! Get involved with the MLSecOps Community and find more resources at https://community.mlsecops.com.
49 episodes
All episodes
Unpacking the Cloud Security Alliance AI Controls Matrix (35:53)
From Pickle Files to Polyglots: Hidden Risks in AI Supply Chains (41:21)
Rethinking AI Red Teaming: Lessons in Zero Trust and Model Protection (36:52)
AI Security: Map It, Manage It, Master It (41:18)
Agentic AI: Tackling Data, Security, and Compliance Risks (23:22)
AI Vulnerabilities: ML Supply Chains to LLM and Agent Exploits (24:08)
Implementing Enterprise AI Governance: Balancing Ethics, Innovation & Risk for Business Success (38:39)
Unpacking Generative AI Red Teaming and Practical Security Solutions (51:53)
AI Security: Vulnerability Detection and Hidden Model File Risks (38:19)
AI Governance Essentials: Empowering Procurement Teams to Navigate AI Risk (37:41)
Crossroads: AI, Cybersecurity, and How to Prepare for What's Next (33:15)
AI Beyond the Hype: Lessons from Cloud on Risk and Security (41:06)
Generative AI Prompt Hacking and Its Impact on AI Security & Safety (31:59)
The MLSecOps Podcast Season 2 Finale (40:54)
Exploring Generative AI Risk Assessment and Regulatory Compliance (37:37)
MLSecOps Culture: Considerations for AI Development and Security Teams (38:44)
Practical Offensive and Adversarial ML for Red Teams (35:24)
Expert Talk from RSA Conference: Securing Generative AI (25:42)
Practical Foundations for Securing AI (38:10)
Evaluating RAG and the Future of LLM Security: Insights with LlamaIndex (31:04)
AI Threat Research: Spotlight on the Huntr Community (31:48)
Securing AI: The Role of People, Processes & Tools in MLSecOps (37:16)
ReDoS Vulnerability Reports: Security Relevance vs. Noisy Nuisance (35:30)
Finding a Balance: LLMs, Innovation, and Security (41:56)
Secure AI Implementation and Governance (38:37)
Risk Management and Enhanced Security Practices for AI Systems (38:08)
Evaluating Real-World Adversarial ML Attack Risks and Effective Management: Robustness vs Non-ML Mitigations (41:19)
From Risk to Responsibility: Violet Teaming in AI; With Guest: Alexander Titus (43:20)
Cybersecurity of Tomorrow: Exploring the Future of Security and Governance for AI Systems; With Guest: Martin Stanley, CISSP (39:45)
AI/ML Security in Retrospect: Insights from Season 1 of The MLSecOps Podcast (Part 2) (42:28)
AI/ML Security in Retrospect: Insights from Season 1 of The MLSecOps Podcast (Part 1) (37:10)
A Holistic Approach to Understanding the AI Lifecycle and Securing ML Systems: Protecting AI Through People, Processes & Technology; With Guest: Rob van der Veer (29:25)
ML Model Fairness: Measuring and Mitigating Algorithmic Disparities; With Guest: Nick Schmidt (35:33)
Exploring AI/ML Security Risks: At Black Hat USA 2023 with Protect AI (35:20)
Everything You Need to Know About Hacker Summer Camp 2023 (38:59)
Privacy Engineering: Safeguarding AI & ML Systems in a Data-Driven Era; With Guest Katharine Jarmul (46:44)
The Intersection of MLSecOps and DataPrepOps; With Guest: Jennifer Prendki, PhD (34:40)
The Evolved Adversarial ML Landscape; With Guest: Apostol Vassilev, NIST (30:30)
Navigating the Challenges of LLMs: Guardrails AI to the Rescue; With Guest: Shreya Rajpal (39:16)
Indirect Prompt Injections and Threat Modeling of LLM Applications; With Guest: Kai Greshake (36:14)
Responsible AI: Defining, Implementing, and Navigating the Future; With Guest: Diya Wynn (33:17)
ML Security: AI Incident Response Plans and Enterprise Risk Culture; With Guest: Patrick Hall (38:49)
AI Audits: Uncovering Risks in ML Systems; With Guest: Shea Brown, PhD (41:02)
MLSecOps: Red Teaming, Threat Modeling, and Attack Methods of AI Apps; With Guest: Johann Rehberger (40:29)
MITRE ATLAS: Defining the ML System Attack Chain and Need for MLSecOps; With Guest: Christina Liaghati, PhD (39:48)
Unpacking AI Bias: Impact, Detection, Prevention, and Policy; With Guest: Dr. Cari Miller, MBA, FHCA (39:22)
Just How Practical Are Data Poisoning Attacks? With Guest: Dr. Florian Tramèr (47:35)
A Closer Look at "Adversarial Robustness for Machine Learning" With Guest: Pin-Yu Chen (38:39)
A Closer Look at "Securing AIML Systems in the Age of Information Warfare" With Guest: Disesdi Susanna Cox (30:50)