
Subbarao Kambhampati: Planning, Reasoning, and Interpretability in the Age of LLMs

1:59:03
 
Content provided by The Gradient. All podcast content, including episodes, graphics, and podcast descriptions, is uploaded and provided directly by The Gradient or their podcast platform partner. If you believe someone is using your copyrighted work without your permission, you can follow the process outlined here: https://tr.player.fm/legal

In episode 110 of The Gradient Podcast, Daniel Bashir speaks to Professor Subbarao Kambhampati.

Professor Kambhampati is a professor of computer science at Arizona State University. He studies fundamental problems in planning and decision making, motivated by the challenges of human-aware AI systems. He is a fellow of the Association for the Advancement of Artificial Intelligence, the American Association for the Advancement of Science, and the Association for Computing Machinery, and was an NSF Young Investigator. He was the president of the Association for the Advancement of Artificial Intelligence, a trustee of the International Joint Conference on Artificial Intelligence, and a founding board member of the Partnership on AI.

Have suggestions for future podcast guests (or other feedback)? Let us know here or reach us at editor@thegradient.pub

Subscribe to The Gradient Podcast: Apple Podcasts | Spotify | Pocket Casts | RSS

Follow The Gradient on Twitter

Outline:

* (00:00) Intro

* (02:11) Professor Kambhampati’s background

* (06:07) Explanation in AI

* (18:08) What people want from explanations—vocabulary and symbolic explanations

* (21:23) The realization of new concepts in explanation—analogy and grounding

* (30:36) Thinking and language

* (31:48) Conscious and subconscious mental activity

* (36:58) Tacit and explicit knowledge

* (42:09) The development of planning as a research area

* (46:12) RL and planning

* (47:47) What makes a planning problem hard?

* (51:23) Scalability in planning

* (54:48) LLMs do not perform reasoning

* (56:51) How to show LLMs aren’t reasoning

* (59:38) External verifiers and backprompting LLMs

* (1:07:51) LLMs as cognitive orthotics, language and representations

* (1:16:45) Finding out what kinds of representations an AI system uses

* (1:31:08) “Compiling” system 2 knowledge into system 1 knowledge in LLMs

* (1:39:53) The Generative AI Paradox, reasoning and retrieval

* (1:43:48) AI as an ersatz natural science

* (1:44:03) Why AI is straying away from its engineering roots, and what constitutes engineering

* (1:58:33) Outro

Links:

* Professor Kambhampati’s Twitter and homepage

* Research and Writing — Planning and Human-Aware AI Systems

* A Validation-structure-based theory of plan modification and reuse (1990)

* Challenges of Human-Aware AI Systems (2020)

* Polanyi vs. Planning (2021)

* LLMs and Planning

* Can LLMs Really Reason and Plan? (2023)

* On the Planning Abilities of LLMs (2023)

* Other

* Changing the nature of AI research


Get full access to The Gradient at thegradientpub.substack.com/subscribe

131 episodes
