AI lab TL;DR | Jurgen Gravestein - The Intelligence Paradox

🔍 In this TL;DR episode, Jurgen Gravestein (Conversation Design Institute) joins the AI lab to discuss his Substack blog post on the ‘Intelligence Paradox’

📌 TL;DR Highlights
⏲️[00:00] Intro
⏲️[01:08] Q1-The ‘Intelligence Paradox’:
How does the language used to describe AI lead to misconceptions and the so-called ‘Intelligence Paradox’?
⏲️[05:36] Q2-‘Conceptual Borrowing’:
What is ‘conceptual borrowing’ and how does it impact public perception and understanding of AI?
⏲️[10:04] Q3-Human vs AI ‘Learning’:
Why is it misleading to use the term ‘learning’ for AI processes, and what does this mean for the future of AI development?
⏲️[14:11] Wrap-up & Outro

💭 Q1-The ‘Intelligence Paradox’

🗣️ What’s really interesting about chatbots and AI is that for the first time in human history, we have technology talking back at us, and that's doing a lot of interesting things to our brains.
🗣️ In the 1960s, there was an experiment with the chatbot ELIZA, which was a very simple, pre-programmed chatbot (...) And it showed that when people are talking to technology, and technology talks back, we’re quite easily fooled by that technology. And that has to do with language fluency and how we perceive language.
🗣️ Language is a very powerful tool (...) there’s a correlation between perceived intelligence and language fluency (...) a social phenomenon that I like to call the ‘Intelligence Paradox’. (...) people perceive you as less smart, just because you are less fluent in how you’re able to express yourself.
🗣️ That also works the other way around with AI and chatbots (...). We saw that chatbots can now respond in extremely fluent language very flexibly. (...) And as a result of that, we perceive them as pretty smart. Smarter than they actually are, in fact.
🗣️ We tend to overestimate the capabilities of [AI] systems because of their language fluency, and we perceive them as smarter than they really are, and it leads to confusion (...) about how the technology actually works.

💭 Q2-‘Conceptual Borrowing’

🗣️ A research article (...) from two professors, Luciano Floridi and Anna Nobre, (...) explaining (...) conceptual borrowing [states]: “through extensive conceptual borrowing, AI has ended up describing computers anthropomorphically, as computational brains with psychological properties, while brain and cognitive sciences have ended up describing brains and minds computationally and informationally, as biological computers.”
🗣️ Similar to the Intelligence Paradox, it can lead to confusion (...) about whether we underestimate or overestimate the impact of a certain technology. And that, in turn, informs how we make policies or regulate certain technologies now or in the future.
🗣️ A small example of conceptual borrowing would be the term “hallucinations”. (...) a common term to describe when systems like ChatGPT say something that sounds very authoritative and sounds very correct and precise, but is actually made up, or partly confabulated. (...) this actually has nothing to do with real hallucinations [but] with statistical patterns that don’t match up with the question that’s being asked.

💭 Q3-Human vs AI ‘Learning’

🗣️ If you talk about conceptual borrowing, “machine learning” is a great example of that, too. (...) there's a very (...) big discrepancy between what learning means in psychological and biological terms and what it means when it comes to these systems.
🗣️ So if you actually start to be convinced that LLMs are as smart and learn as quickly as people or children (...) you could be over-attributing qualities to these systems.
🗣️ [ARC-AGI challenge:] a $1 million USD prize pool for the first person who can build an AI to solve a new benchmark that (...) consists of very simple puzzles that a five-year-old (...) could basically solve. (...) it hasn't been solved yet.
🗣️ That’s, again, an interesting way to look at learning, and especially where these systems fall short. [AI] can reason based on (...) the data that they've seen, but as soon as [they] (...) go out of (...) what they've seen in their data set, they will struggle with whatever task they are being asked to perform.

📌 About Our Guest
🎙️ Jurgen Gravestein | Sr Conversation Designer, Conversation Design Institute (CDI)
𝕏 https://x.com/@gravestein1989
🌐 Blog Post | The Intelligence Paradox
https://jurgengravestein.substack.com/p/the-intelligence-paradox
🌐 Newsletter
https://jurgengravestein.substack.com
🌐 CDI
https://www.conversationdesigninstitute.com
🌐 Profs. Floridi & Nobre's article
http://dx.doi.org/10.2139/ssrn.4738331
🌐 Jurgen Gravestein
https://www.linkedin.com/in/jurgen-gravestein

Jurgen Gravestein is a writer, conversation designer, and AI consultant. He works at CDI, the world’s leading training and certification institute in conversational AI. He also runs a successful Substack newsletter, “Teaching computers how to talk”.
