Pattern Recognition vs True Intelligence - Francois Chollet

2:42:54

Content is provided by Machine Learning Street Talk (MLST). All podcast content, including episodes, graphics, and podcast descriptions, is uploaded and provided directly by Machine Learning Street Talk (MLST) or their podcast platform partner. If you believe someone is using your copyrighted work without your permission, you can follow the process outlined here: https://tr.player.fm/legal

Francois Chollet, a prominent AI researcher and the creator of ARC-AGI, discusses the nature of intelligence, consciousness, and the current state of artificial intelligence.

Chollet explains that real intelligence isn't about memorizing information or having lots of knowledge - it's about being able to handle new situations effectively. This is why he believes current large language models (LLMs) have "near-zero intelligence" despite their impressive abilities. They're more like sophisticated memory and pattern-matching systems than truly intelligent beings.

***

MLST IS SPONSORED BY TUFA AI LABS!

The current winners of the ARC challenge, MindsAI, are part of Tufa AI Labs. They are hiring ML engineers. Are you interested? Please go to https://tufalabs.ai/

***

He introduces his "Kaleidoscope Hypothesis," which suggests that while the world seems infinitely complex, it is actually made up of simpler patterns that repeat and combine in different ways. True intelligence, he argues, involves identifying these basic patterns and using them to understand new situations.
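
As a purely illustrative sketch of that idea (this is not Chollet's or MindsAI's code, and the primitive set, task, and search strategy are invented here), program-synthesis approaches to ARC-style tasks look for a short composition of reusable building blocks that explains a handful of demonstration pairs. The Python below brute-forces such a composition over three toy grid transformations, preferring the shortest program it finds:

```python
# Illustrative sketch: solve a tiny ARC-style task by searching for a short
# composition of reusable grid primitives ("building blocks" that recombine).
from itertools import product

# Toy primitives over grids represented as tuples of tuples.
def flip_h(g):
    return tuple(row[::-1] for row in g)      # mirror left-right

def flip_v(g):
    return tuple(reversed(g))                 # mirror top-bottom

def transpose(g):
    return tuple(zip(*g))                     # swap rows and columns

PRIMITIVES = {"flip_h": flip_h, "flip_v": flip_v, "transpose": transpose}

def search_program(examples, max_depth=3):
    """Return the shortest primitive composition consistent with all examples."""
    for depth in range(1, max_depth + 1):     # shortest programs first (Occam's razor)
        for names in product(PRIMITIVES, repeat=depth):
            def run(grid, names=names):
                for name in names:
                    grid = PRIMITIVES[name](grid)
                return grid
            if all(run(x) == y for x, y in examples):
                return names
    return None

# Two demonstration pairs that implicitly specify "rotate the grid 180 degrees".
demos = [
    (((1, 0), (0, 0)), ((0, 0), (0, 1))),
    (((1, 2), (3, 4)), ((4, 3), (2, 1))),
]
print(search_program(demos))                  # -> ('flip_h', 'flip_v')
```

Actual ARC entries typically use far larger primitive sets and smarter or learned search, but the principle is the same: reuse a small kit of abstractions to handle tasks never seen before.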

Chollet also talks about consciousness, suggesting it develops gradually in children rather than appearing all at once. He believes consciousness exists in degrees: animals have it to some extent, and even human consciousness varies with age and circumstances (for example, we are more conscious when learning something new than when doing routine tasks).

On AI safety, Chollet takes a notably different stance from many in Silicon Valley. He views AGI development as a scientific challenge rather than a religious quest, and doesn't share the apocalyptic concerns of some AI researchers. He argues that intelligence itself isn't dangerous - it's just a tool for turning information into useful models. What matters is how we choose to use it.

ARC-AGI Prize:

https://arcprize.org/

Francois Chollet:

https://x.com/fchollet

Shownotes:

https://www.dropbox.com/scl/fi/j2068j3hlj8br96pfa7bi/CHOLLET_FINAL.pdf?rlkey=xkbr7tbnrjdl66m246w26uc8k&st=0a4ec4na&dl=0

TOC:

1. Intelligence and Model Building

[00:00:00] 1.1 Intelligence Definition and ARC Benchmark

[00:05:40] 1.2 LLMs as Program Memorization Systems

[00:09:36] 1.3 Kaleidoscope Hypothesis and Abstract Building Blocks

[00:13:39] 1.4 Deep Learning Limitations and System 2 Reasoning

[00:29:38] 1.5 Intelligence vs. Skill in LLMs and Model Building

2. ARC Benchmark and Program Synthesis

[00:37:36] 2.1 Intelligence Definition and LLM Limitations

[00:41:33] 2.2 Meta-Learning System Architecture

[00:56:21] 2.3 Program Search and Occam's Razor

[00:59:42] 2.4 Developer-Aware Generalization

[01:06:49] 2.5 Task Generation and Benchmark Design

3. Cognitive Systems and Program Generation

[01:14:38] 3.1 System 1/2 Thinking Fundamentals

[01:22:17] 3.2 Program Synthesis and Combinatorial Challenges

[01:31:18] 3.3 Test-Time Fine-Tuning Strategies

[01:36:10] 3.4 Evaluation and Leakage Problems

[01:43:22] 3.5 ARC Implementation Approaches

4. Intelligence and Language Systems

[01:50:06] 4.1 Intelligence as Tool vs Agent

[01:53:53] 4.2 Cultural Knowledge Integration

[01:58:42] 4.3 Language and Abstraction Generation

[02:02:41] 4.4 Embodiment in Cognitive Systems

[02:09:02] 4.5 Language as Cognitive Operating System

5. Consciousness and AI Safety

[02:14:05] 5.1 Consciousness and Intelligence Relationship

[02:20:25] 5.2 Development of Machine Consciousness

[02:28:40] 5.3 Consciousness Prerequisites and Indicators

[02:36:36] 5.4 AGI Safety Considerations

[02:40:29] 5.5 AI Regulation Framework
