Content is provided by Innovation For All and Sheana Ahlqvist. All podcast content, including episodes, graphics, and podcast descriptions, is uploaded and provided directly by Innovation For All and Sheana Ahlqvist or their podcast platform partner. If you believe someone is using your copyrighted work without permission, you can follow the process outlined at https://tr.player.fm/legal.

When bad data leads to social injustice, featuring David Robinson

1:05:25
 
Manage episode 321046468 series 2923153

Can AI really change the world? Or are its developing algorithms formalizing social injustice? When these highly-technical systems derive patterns from existing datasets, their models can perpetuate past mistakes.
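The point about models deriving patterns from existing datasets can be made concrete with a deliberately tiny sketch. Everything below (the data, the "neighborhood" groups, the decision rule) is invented for illustration and is not from the episode; it only shows the mechanism by which learning from past decisions reproduces the bias in those decisions.

```python
# Hypothetical sketch: a "model" that learns from historical hiring
# decisions reproduces the bias baked into those decisions.
from collections import defaultdict

# Past outcomes: (neighborhood, qualified, hired). In this invented
# history, qualified applicants from neighborhood "B" were under-hired.
history = [
    ("A", True, True), ("A", True, True), ("A", False, False),
    ("B", True, False), ("B", True, False), ("B", False, False),
]

# "Training": estimate the past hiring rate per neighborhood.
counts = defaultdict(lambda: [0, 0])  # neighborhood -> [hired, total]
for zone, qualified, hired in history:
    counts[zone][0] += int(hired)
    counts[zone][1] += 1

def predict(zone):
    hired, total = counts[zone]
    return hired / total >= 0.5  # recommend hiring if past rate >= 50%

# Two equally qualified applicants get different recommendations,
# purely because the model mirrors past decisions.
print(predict("A"))  # True
print(predict("B"))  # False
```

The qualification signal never enters the prediction at all; the model is faithful to the historical pattern, mistakes included.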

In this episode of the Innovation For All Podcast, Sheana Ahlqvist discusses with David Robinson the threats of social bias and discrimination becoming embedded in Artificial Intelligence.

IN THIS EPISODE YOU’LL LEARN:

  • What is the role of technological advances in shaping society?
  • What is the difference between machine learning and artificial intelligence?
  • The social justice implications of technology
  • What are the limitations of finding patterns in previous data?
  • How should government regulate new, highly technical systems?
  • The need for more resources and more thoughtfulness in regulating data
  • Examples of data-driven issues in the private sector
  • Removing skepticism about regulatory agencies examining data models
  • Why authorities should remember that there are limits to what AI models can do

David is the co-founder of Upturn and currently a Visiting Scientist at the AI Policy and Practice Initiative in Cornell’s College of Computing and Information Science. He discusses how government regulatory agencies should examine new AI models and systems, especially as the technology continues to creep into our day-to-day lives. He also explains the importance of “ground truthing”: examining a technology’s actual capabilities and limits before decision makers choose whether to implement it.
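One way to picture "ground truthing" is to check a system's predictions against outcomes that were actually observed later, broken out by group. The sketch below is hypothetical (the groups, predictions, and outcomes are invented, not from the episode); it only illustrates the kind of check being described.

```python
# Hypothetical sketch of "ground truthing": compare predictions against
# outcomes observed later, per group, to see where the model really works.
from collections import defaultdict

predictions = [("A", True), ("A", False), ("B", False), ("B", False)]
observed = [True, False, True, False]  # the real outcomes, seen later

stats = defaultdict(lambda: [0, 0])  # group -> [correct, total]
for (group, pred), actual in zip(predictions, observed):
    stats[group][0] += int(pred == actual)
    stats[group][1] += 1

for group, (correct, total) in sorted(stats.items()):
    print(f"{group}: {correct}/{total} correct")
```

An aggregate accuracy number would hide the difference; disaggregating against observed outcomes is what reveals whether the system holds up for everyone it affects.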

Get show notes for this and every episode at innovationforallcast.com or find us on Twitter @inforallpodcast.

Original air date: 12/26/18


Send in a voice message: https://anchor.fm/innovation-for-all/message
Support this podcast: https://anchor.fm/innovation-for-all/support


68 episodes
