
The Promise and Peril of Open Source AI with Elizabeth Seger and Jeffrey Ladish

38:44
 
Content is provided by Center for Humane Technology, Tristan Harris, Aza Raskin, and The Center for Humane Technology. All podcast content, including episodes, graphics, and podcast descriptions, is uploaded and provided directly by Center for Humane Technology, Tristan Harris, Aza Raskin, and The Center for Humane Technology or their podcast platform partner. If you believe someone is using your copyrighted work without your permission, you can follow the process outlined here: https://tr.player.fm/legal

As AI development races forward, a fierce debate has emerged over open source AI models. So what does it mean to open-source AI? Are we opening Pandora’s box of catastrophic risks? Or is open-sourcing AI the only way we can democratize its benefits and dilute the power of big tech?

Correction: When discussing the large language model Bloom, Elizabeth said it functions in 26 different languages. Bloom is actually able to generate text in 46 natural languages and 13 programming languages, with more in the works.

RECOMMENDED MEDIA

Open-Sourcing Highly Capable Foundation Models

This report, co-authored by Elizabeth Seger, attempts to clarify open-source terminology and to offer a thorough analysis of risks and benefits from open-sourcing AI

BadLlama: cheaply removing safety fine-tuning from Llama 2-Chat 13B

This paper, co-authored by Jeffrey Ladish, demonstrates that it’s possible to effectively undo the safety fine-tuning of Llama 2-Chat 13B for less than $200 while retaining its general capabilities

Centre for the Governance of AI

Supports governments, technology companies, and other key institutions by producing relevant research and guidance around how to respond to the challenges posed by AI

AI: Futures and Responsibility (AI:FAR)

Aims to shape the long-term impacts of AI in ways that are safe and beneficial for humanity

Palisade Research

Studies the offensive capabilities of AI systems today to better understand the risk of losing control to AI systems forever

RECOMMENDED YUA EPISODES

A First Step Toward AI Regulation with Tom Wheeler

No One is Immune to AI Harms with Dr. Joy Buolamwini

Mustafa Suleyman Says We Need to Contain AI. How Do We Do It?

The AI Dilemma

Your Undivided Attention is produced by the Center for Humane Technology. Follow us on Twitter: @HumaneTech_

