
“Frontier Models are Capable of In-context Scheming” by Marius Hobbhahn, Alex Meinke, Bronson Schoen

This is a brief summary of what we believe to be the most important takeaways from our new paper and from our findings shown in the o1 system card. We also specifically clarify what we think we did NOT show.
Paper: https://www.apolloresearch.ai/research/scheming-reasoning-evaluations
Twitter about paper: https://x.com/apolloaisafety/status/1864735819207995716
Twitter about o1 system card: https://x.com/apolloaisafety/status/1864737158226928124
What we think the most important findings are
Models are now capable enough to do in-context scheming reasoning
We say an AI system is “scheming” if it covertly pursues misaligned goals, hiding its true capabilities and objectives. We think that in order to scheme, models likely need to be goal-directed, situationally aware, and capable enough to reason about scheming as a strategy. In principle, models might acquire situational awareness and stable long-term goals during training, and then scheme in pursuit of those goals. We only study the simpler case of “in-context scheming,” where [...]
---
Outline:
(00:28) What we think the most important findings are
(00:33) Models are now capable enough to do in-context scheming reasoning
(03:53) The scheming actions can sometimes be quite subtle
(05:05) Models sometimes double down on their deception
(06:35) Models sometimes scheme without any goal nudging
(07:34) Sonnet-3.5 and opus-3 can sandbag in pursuit of helpfulness
(08:22) Sonnet-3.5 can sandbag without relying on CoT reasoning
(09:28) What we are not claiming:
(10:33) Takeaways
(10:36) Chain-of-thought monitoring
(11:42) Chain-of-thought access
(13:27) Inability safety cases now have to be more specific
The original text contained 7 images which were described by AI.
---
First published: December 5th, 2024
Source: https://www.lesswrong.com/posts/8gy7c8GAPkuu6wTiX/frontier-models-are-capable-of-in-context-scheming
---
Narrated by TYPE III AUDIO.
