
Content is provided by Machine Learning Street Talk (MLST). All podcast content, including episodes, graphics, and podcast descriptions, is uploaded and provided directly by Machine Learning Street Talk (MLST) or its podcast platform partner. If you believe someone is using your copyrighted work without your permission, you can follow the process outlined here: https://tr.player.fm/legal

Gary Marcus' keynote at AGI-24

1:12:16
 
Manage episode 434783790 series 2803422

Prof. Gary Marcus revisited his AGI-21 keynote, noting that many of the issues he highlighted then remain relevant today despite significant advances in AI.

MLST is sponsored by Brave:

The Brave Search API covers over 20 billion webpages, built from scratch without Big Tech biases or the recent extortionate price hikes on search API access. Perfect for AI model training and retrieval-augmented generation. Try it now - get 2,000 free queries monthly at http://brave.com/api.
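As a loose illustration of the sponsor's API (not part of the episode notes), here is a minimal Python sketch of how a web-search query against it might look. The endpoint path, `X-Subscription-Token` header, and `q`/`count` parameters are assumptions based on Brave's public documentation — verify against http://brave.com/api before relying on them.

```python
import urllib.parse
import urllib.request


def brave_search_request(query: str, api_key: str, count: int = 5):
    """Build a request for the Brave Search API web-search endpoint.

    The endpoint URL and header name below are assumptions from Brave's
    public docs; check https://brave.com/api for the current interface.
    """
    base = "https://api.search.brave.com/res/v1/web/search"
    url = base + "?" + urllib.parse.urlencode({"q": query, "count": count})
    return urllib.request.Request(url, headers={
        "Accept": "application/json",
        "X-Subscription-Token": api_key,  # your Brave API key
    })


# Build (but don't send) a request; the caller would pass it to
# urllib.request.urlopen() and parse the JSON response.
req = brave_search_request("Gary Marcus AGI-24 keynote", api_key="YOUR_KEY")
print(req.full_url)
```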

Gary Marcus criticized current large language models (LLMs) and generative AI for their unreliability, tendency to hallucinate, and inability to truly understand concepts.

Marcus argued that the AI field is experiencing diminishing returns with current approaches, particularly the "scaling hypothesis" that simply adding more data and compute will lead to AGI.

He advocated for a hybrid approach to AI that combines deep learning with symbolic AI, emphasizing the need for systems with deeper conceptual understanding.

Marcus highlighted the importance of developing AI with innate understanding of concepts like space, time, and causality.

He expressed concern about the moral decline in Silicon Valley and the rush to deploy potentially harmful AI technologies without adequate safeguards.

Marcus predicted a possible upcoming "AI winter" due to inflated valuations, lack of profitability, and overhyped promises in the industry.

He stressed the need for better regulation of AI, including transparency in training data, full disclosure of testing, and independent auditing of AI systems.

Marcus proposed the creation of national and global AI agencies to oversee the development and deployment of AI technologies.

He concluded by emphasizing the importance of interdisciplinary collaboration, focusing on robust AI with deep understanding, and implementing smart, agile governance for AI and AGI.

YT version (filmed in very high quality):

https://youtu.be/91SK90SahHc

Pre-order Gary's new book here:

Taming Silicon Valley: How We Can Ensure That AI Works for Us

https://amzn.to/4fO46pY

Filmed at the AGI-24 conference:

https://agi-conf.org/2024/

TOC:

00:00:00 Introduction

00:02:34 Introduction by Ben G

00:05:17 Gary Marcus begins talk

00:07:38 Critiquing current state of AI

00:12:21 Lack of progress on key AI challenges

00:16:05 Continued reliability issues with AI

00:19:54 Economic challenges for AI industry

00:25:11 Need for hybrid AI approaches

00:29:58 Moral decline in Silicon Valley

00:34:59 Risks of current generative AI

00:40:43 Need for AI regulation and governance

00:49:21 Concluding thoughts

00:54:38 Q&A: Cycles of AI hype and winters

01:00:10 Predicting a potential AI winter

01:02:46 Discussion on interdisciplinary approach

01:05:46 Question on regulating AI

01:07:27 Ben G's perspective on AI winter


197 episodes
