Artwork

İçerik Asia-Pacific Institute for Law and Security and Asia-Pacific Institute for Law tarafından sağlanmıştır. Bölümler, grafikler ve podcast açıklamaları dahil tüm podcast içeriği doğrudan Asia-Pacific Institute for Law and Security and Asia-Pacific Institute for Law veya podcast platform ortağı tarafından yüklenir ve sağlanır. Birinin telif hakkıyla korunan çalışmanızı izniniz olmadan kullandığını düşünüyorsanız burada https://tr.player.fm/legal özetlenen süreci takip edebilirsiniz.
Player FM - Podcast Uygulaması
Player FM uygulamasıyla çevrimdışı Player FM !

AWS, the Alignment problem and regulation - Brendan Walker-Munro and Sam Hartridge

47:08
 
Paylaş
 

Manage episode 373073688 series 2811139
İçerik Asia-Pacific Institute for Law and Security and Asia-Pacific Institute for Law tarafından sağlanmıştır. Bölümler, grafikler ve podcast açıklamaları dahil tüm podcast içeriği doğrudan Asia-Pacific Institute for Law and Security and Asia-Pacific Institute for Law veya podcast platform ortağı tarafından yüklenir ve sağlanır. Birinin telif hakkıyla korunan çalışmanızı izniniz olmadan kullandığını düşünüyorsanız burada https://tr.player.fm/legal özetlenen süreci takip edebilirsiniz.

In this interview, we are continuing our series on legal review of AWS, and speaking with two of the Law and Future of war research team, about an issue that impacts the design approaches to AWS: the alignment problem. In May 2023, there were reports of an AWS being tested, that turned upon its operator, and eventually cut its communications links so it could go after its originally planned mission... this prompted discussion about the alignment problem with AWS, impacting future TEVV strategies and regulatory approaches to this technology.
The conference referred to in the episode can be found in the notes to the attached link, with relevant excerpts extracted below: - Highlights from the RAeS Future Combat Air & Space Capabilities Summit (aerosociety.com):

'Could an AI-enabled UCAV turn on its creators to accomplish its mission? (USAF)

[UPDATE 2/6/23 - in communication with AEROSPACE - Col Hamilton admits he "mis-spoke" in his presentation at the Royal Aeronautical Society FCAS Summit and the 'rogue AI drone simulation' was a hypothetical "thought experiment" from outside the military, based on plausible scenarios and likely outcomes rather than an actual USAF real-world simulation saying: "We've never run that experiment, nor would we need to in order to realise that this is a plausible outcome". ]

Col Tucker ‘Cinco’ Hamilton, the Chief of AI Test and Operations, USAF, ... cautioned against relying too much on AI noting how easy it is to trick and deceive.

... Said Hamilton: “We were training it in simulation to identify and target a SAM threat. And then the operator would say yes, kill that threat. The system started realising that while they did identify the threat at times the human operator would tell it not to kill that threat, but it got its points by killing that threat. So what did it do? It killed the operator. It killed the operator because that person was keeping it from accomplishing its objective.”

Dr Brendan Walker-Munro is a Senior Research Fellow with the University of Queensland's Law and the Future of War research group. Brendan's research focus is on criminal and civil aspects of national security law, and the role played by intelligence agencies, law enforcement and the military in investigating and responding to critical incidents. He is also interested in the national security impacts of law on topics such as privacy, identity crime and digital security.

Dr Sam Hartridge is a post-doctoral researcher at the University of Queensland. His research is currently examining the interplay between technical questions of AI safety, AI risk management frameworks and standards, and foundational international and domestic legal doctrine.

Additional Resources:

  continue reading

89 bölüm

Artwork
iconPaylaş
 
Manage episode 373073688 series 2811139
İçerik Asia-Pacific Institute for Law and Security and Asia-Pacific Institute for Law tarafından sağlanmıştır. Bölümler, grafikler ve podcast açıklamaları dahil tüm podcast içeriği doğrudan Asia-Pacific Institute for Law and Security and Asia-Pacific Institute for Law veya podcast platform ortağı tarafından yüklenir ve sağlanır. Birinin telif hakkıyla korunan çalışmanızı izniniz olmadan kullandığını düşünüyorsanız burada https://tr.player.fm/legal özetlenen süreci takip edebilirsiniz.

In this interview, we are continuing our series on legal review of AWS, and speaking with two of the Law and Future of war research team, about an issue that impacts the design approaches to AWS: the alignment problem. In May 2023, there were reports of an AWS being tested, that turned upon its operator, and eventually cut its communications links so it could go after its originally planned mission... this prompted discussion about the alignment problem with AWS, impacting future TEVV strategies and regulatory approaches to this technology.
The conference referred to in the episode can be found in the notes to the attached link, with relevant excerpts extracted below: - Highlights from the RAeS Future Combat Air & Space Capabilities Summit (aerosociety.com):

'Could an AI-enabled UCAV turn on its creators to accomplish its mission? (USAF)

[UPDATE 2/6/23 - in communication with AEROSPACE - Col Hamilton admits he "mis-spoke" in his presentation at the Royal Aeronautical Society FCAS Summit and the 'rogue AI drone simulation' was a hypothetical "thought experiment" from outside the military, based on plausible scenarios and likely outcomes rather than an actual USAF real-world simulation saying: "We've never run that experiment, nor would we need to in order to realise that this is a plausible outcome". ]

Col Tucker ‘Cinco’ Hamilton, the Chief of AI Test and Operations, USAF, ... cautioned against relying too much on AI noting how easy it is to trick and deceive.

... Said Hamilton: “We were training it in simulation to identify and target a SAM threat. And then the operator would say yes, kill that threat. The system started realising that while they did identify the threat at times the human operator would tell it not to kill that threat, but it got its points by killing that threat. So what did it do? It killed the operator. It killed the operator because that person was keeping it from accomplishing its objective.”

Dr Brendan Walker-Munro is a Senior Research Fellow with the University of Queensland's Law and the Future of War research group. Brendan's research focus is on criminal and civil aspects of national security law, and the role played by intelligence agencies, law enforcement and the military in investigating and responding to critical incidents. He is also interested in the national security impacts of law on topics such as privacy, identity crime and digital security.

Dr Sam Hartridge is a post-doctoral researcher at the University of Queensland. His research is currently examining the interplay between technical questions of AI safety, AI risk management frameworks and standards, and foundational international and domestic legal doctrine.

Additional Resources:

  continue reading

89 bölüm

Semua episod

×
 
Loading …

Player FM'e Hoş Geldiniz!

Player FM şu anda sizin için internetteki yüksek kalitedeki podcast'leri arıyor. En iyi podcast uygulaması ve Android, iPhone ve internet üzerinde çalışıyor. Aboneliklerinizi cihazlar arasında eş zamanlamak için üye olun.

 

Hızlı referans rehberi