Content provided by Daniel Filan. All podcast content, including episodes, graphics, and podcast descriptions, is uploaded and provided directly by Daniel Filan or their podcast platform partner. If you believe someone is using your copyrighted work without your permission, you can follow the process outlined here: https://tr.player.fm/legal

16 - Preparing for Debate AI with Geoffrey Irving

1:04:49
 

Many people in the AI alignment space have heard of AI safety via debate - check out AXRP episode 6 (axrp.net/episode/2021/04/08/episode-6-debate-beth-barnes.html) if you need a primer. But how do we get language models to the stage where they can usefully implement debate? In this episode, I talk to Geoffrey Irving about the role of language models in AI safety, as well as three projects he's done that get us closer to making debate happen: using language models to find flaws in themselves, getting language models to back up claims they make with citations, and figuring out how uncertain language models should be about the quality of various answers.

Topics we discuss, and timestamps:

- 00:00:48 - Status update on AI safety via debate

- 00:10:24 - Language models and AI safety

- 00:19:34 - Red teaming language models with language models

- 00:35:31 - GopherCite

- 00:49:10 - Uncertainty Estimation for Language Reward Models

- 01:00:26 - Following Geoffrey's work, and working with him

The transcript: axrp.net/episode/2022/07/01/episode-16-preparing-for-debate-ai-geoffrey-irving.html

Geoffrey's Twitter: twitter.com/geoffreyirving

Research we discuss:

- Red Teaming Language Models With Language Models: arxiv.org/abs/2202.03286

- Teaching Language Models to Support Answers with Verified Quotes, aka GopherCite: arxiv.org/abs/2203.11147

- Uncertainty Estimation for Language Reward Models: arxiv.org/abs/2203.07472

- AI Safety via Debate: arxiv.org/abs/1805.00899

- Writeup: progress on AI safety via debate: lesswrong.com/posts/Br4xDbYu4Frwrb64a/writeup-progress-on-ai-safety-via-debate-1

- Eliciting Latent Knowledge: ai-alignment.com/eliciting-latent-knowledge-f977478608fc

- Training Compute-Optimal Large Language Models, aka Chinchilla: arxiv.org/abs/2203.15556


39 episodes

