
Welcome to the Official Schedule for RightsCon Toronto 2018. This year's program, built by our global community, is our most ambitious one yet. Within the program, you will find 18 thematic tracks to help you navigate our 450+ sessions.

Build your own customized RightsCon schedule by logging into Sched (or creating an account), and selecting the sessions that you wish to attend. Be sure to get your ticket to RightsCon first. You can visit rightscon.org for more information. 

If you’ve created a profile with a picture and bio, please allow a few hours for the RightsCon team to merge it with your existing speaker profile.

Version 2.3 (last updated May 15, 2018).

Wednesday, May 16 • 14:30 - 15:45
Artificial Intelligence: Lethal Autonomous Weapons Systems and Peace Time Threats


We are on the verge of one of the greatest paradigm shifts in human history. Research on artificial intelligence is enabling humanity to create autonomous intelligent software agents that can perform and learn new tasks without human guidance, observation, or intervention, supplanting humans in decision-making processes. This is already the case for military weapons platforms known as Lethal Autonomous Weapons Systems (LAWS), which can kill and destroy a target without human intervention. There is, however, also a plethora of peacetime uses and risks of autonomous agents, including mass disinformation, criminal profiling, and the management of populations amid resource scarcity, to name a few. The panel aims to address some of the following questions:
• What if fake news and internet trolls are generated by increasingly autonomous software?
• Would autonomous criminal profiling turn the presumption of innocence upside-down?
• If code represents the law of cyberspace, and computer software potentially interferes with citizens' rights and integrity, shouldn't their use be regulated by a democratic process?
• The language of human vs. machine decision-making: are we blurring important distinctions?
• Do we have a moral duty not to create 'intelligent' systems that could potentially become a risk for humanity?

This session is organized by the ICT for Peace Foundation and the Zurich Hub for Ethics and Technology, Switzerland, as part of an ongoing process of examining AI, LAWS, and peacetime threats.


David Kirkpatrick

Editor-in-chief, Techonomy Media
David Kirkpatrick is a journalist, author, and founder of Techonomy Media, whose conferences gather leaders to discuss how tech changes everything. Techonomy 2018 takes place Nov. 11-13, 2018 in Half Moon Bay, California. It was there, in 2016, that Mark Zuckerberg made his notorious remarks about fake news...


Todd Davies

Academic Research and Program Officer, Stanford University
I am a social scientist whose work over the past 17 years has focused on the relationships between digital technologies, group deliberation, rights and freedoms, and democratic decision making. My previous work focused on machine learning and knowledge representation in artificial intelligence...

Kyle Dent

Research Area Manager, PARC
I am an AI researcher and data scientist studying the interplay between people and technology. I lead research and innovation projects and am interested in technology and society, intelligent conversational agents, and complex systems.

Maarten Van Horenbeeck

Board Member, FIRST.Org, Inc.
Maarten Van Horenbeeck is a Board Member and former Chairman of the Forum of Incident Response and Security Teams (FIRST). He also works as Chief Information Security Officer for Zendesk. Prior to this, he managed the Threat Intelligence team at Amazon and worked on the Security teams...

Wednesday May 16, 2018 14:30 - 15:45 EDT