Shannon Vallor, "The AI Mirror: How to Reclaim Our Humanity in an Age of Machine Thinking" (Oxford UP, 2024)
Manage episode 428130594 series 2421450
There's a lot of talk these days about the existential risk that artificial intelligence poses to humanity: that somehow the AIs will rise up and destroy us or become our overlords.
In The AI Mirror: How to Reclaim Our Humanity in an Age of Machine Thinking (Oxford UP, 2024), Shannon Vallor argues that the actual, and very alarming, existential risk of AI we face right now is quite different. Because some AI technologies, such as ChatGPT and other large language models, can closely mimic the outputs of an understanding mind without having any actual understanding, they can encourage us to surrender the activities of thinking and reasoning. This risks diminishing our ability to respond to challenges and to imagine and bring about different futures. In her compelling book, Vallor, who holds the Baillie Gifford Chair in the Ethics of Data and Artificial Intelligence at the University of Edinburgh's Edinburgh Futures Institute, critically examines AI doomers and longtermism, the nature of AI in relation to human intelligence, and the technology industry's hand in diverting our attention from the serious risks we face.
Learn more about your ad choices. Visit megaphone.fm/adchoices
Support our show by becoming a premium member! https://newbooksnetwork.supportingcast.fm/philosophy
385 episodes