Content is provided by Daniel Filan. All podcast content, including episodes, graphics, and podcast descriptions, is uploaded and provided directly by Daniel Filan or their podcast platform partner. If you believe someone is using your copyrighted work without your permission, you can follow the process outlined at https://tr.player.fm/legal.

30 - AI Security with Jeffrey Ladish

Duration: 2:15:44
 

Top labs use various forms of "safety training" on models before their release to make sure they don't do nasty stuff - but how robust is that? How can we ensure that the weights of powerful AIs don't get leaked or stolen? And what can AI even do these days? In this episode, I speak with Jeffrey Ladish about security and AI.

Patreon: patreon.com/axrpodcast

Ko-fi: ko-fi.com/axrpodcast

Topics we discuss, and timestamps:

0:00:38 - Fine-tuning away safety training

0:13:50 - Dangers of open LLMs vs internet search

0:19:52 - What we learn by undoing safety filters

0:27:34 - What can you do with jailbroken AI?

0:35:28 - Security of AI model weights

0:49:21 - Securing against attackers vs AI exfiltration

1:08:43 - The state of computer security

1:23:08 - How AI labs could be more secure

1:33:13 - What does Palisade do?

1:44:40 - AI phishing

1:53:32 - More on Palisade's work

1:59:56 - Red lines in AI development

2:09:56 - Making AI legible

2:14:08 - Following Jeffrey's research

The transcript: axrp.net/episode/2024/04/30/episode-30-ai-security-jeffrey-ladish.html

Palisade Research: palisaderesearch.org

Jeffrey's Twitter/X account: twitter.com/JeffLadish

Main papers we discussed:

- LoRA Fine-tuning Efficiently Undoes Safety Training in Llama 2-Chat 70B: arxiv.org/abs/2310.20624

- BadLLaMa: Cheaply Removing Safety Fine-tuning From LLaMa 2-Chat 13B: arxiv.org/abs/2311.00117

- Securing Artificial Intelligence Model Weights: rand.org/pubs/working_papers/WRA2849-1.html

Other links:

- Llama 2: Open Foundation and Fine-Tuned Chat Models: https://arxiv.org/abs/2307.09288

- Fine-tuning Aligned Language Models Compromises Safety, Even When Users Do Not Intend To!: https://arxiv.org/abs/2310.03693

- Shadow Alignment: The Ease of Subverting Safely-Aligned Language Models: https://arxiv.org/abs/2310.02949

- On the Societal Impact of Open Foundation Models (Stanford paper on marginal harms from open-weight models): https://crfm.stanford.edu/open-fms/

- The Operational Risks of AI in Large-Scale Biological Attacks (RAND): https://www.rand.org/pubs/research_reports/RRA2977-2.html

- Preventing model exfiltration with upload limits: https://www.alignmentforum.org/posts/rf66R4YsrCHgWx9RG/preventing-model-exfiltration-with-upload-limits

- A deep dive into an NSO zero-click iMessage exploit: Remote Code Execution: https://googleprojectzero.blogspot.com/2021/12/a-deep-dive-into-nso-zero-click.html

- In-browser transformer inference: https://aiserv.cloud/

- Anatomy of a rental phishing scam: https://jeffreyladish.com/anatomy-of-a-rental-phishing-scam/

- Causal Scrubbing: a method for rigorously testing interpretability hypotheses: https://www.alignmentforum.org/posts/JvZhhzycHu2Yd57RN/causal-scrubbing-a-method-for-rigorously-testing

Episode art by Hamish Doodles: hamishdoodles.com
