Content provided by LessWrong. All podcast content, including episodes, graphics, and podcast descriptions, is uploaded and provided directly by LessWrong or its podcast platform partner. If you believe someone is using your copyrighted work without permission, you can follow the process outlined at https://tr.player.fm/legal.

“Fields that I reference when thinking about AI takeover prevention” by Buck

20:01
 
Manage episode 434444221 series 3364760
Crossposted from the AI Alignment Forum. May contain more technical jargon than usual. This is a link post. Is AI takeover like a nuclear meltdown? A coup? A plane crash?
My day job is thinking about safety measures that aim to reduce catastrophic risks from AI (especially risks from egregious misalignment). The two main themes of this work are the design of such measures (what's the space of techniques we might expect to be affordable and effective) and their evaluation (how do we decide which safety measures to implement, and whether a set of measures is sufficiently robust). I focus especially on AI control, where we assume our models are trying to subvert our safety measures and aspire to find measures that are robust anyway.
Like other AI safety researchers, I often draw inspiration from other fields that contain potential analogies. Here are some of those fields, my opinions on their [...]
---
Outline:
(01:04) Robustness to insider threats
(07:16) Computer security
(09:58) Adversarial risk analysis
(11:58) Safety engineering
(13:34) Physical security
(18:06) How human power structures arise and are preserved
The original text contained 1 image which was described by AI.
---
First published:
August 13th, 2024
Source:
https://www.lesswrong.com/posts/xXXXkGGKorTNmcYdb/fields-that-i-reference-when-thinking-about-ai-takeover
---
Narrated by TYPE III AUDIO.
---

355 episodes

