
Welcome to the Official Schedule for RightsCon Toronto 2018. This year’s program, built by our global community, is our most ambitious one yet. Within the program, you will find 18 thematic tracks to help you navigate our 450+ sessions.

Build your own customized RightsCon schedule by logging into Sched (or creating an account), and selecting the sessions that you wish to attend. Be sure to get your ticket to RightsCon first. You can visit rightscon.org for more information. 

If you’ve created a profile with a picture and bio, please allow a few hours for the RightsCon team to merge it with your existing speaker profile.

Version 2.3 (updated May 15, 2018).

Wednesday, May 16 • 17:15 - 18:15
Overcoming the black box problem: Scrutinizing automated decisions in the real world


Automated decisions increasingly mediate civic life. Governments use algorithms to screen immigrants and allocate social services. Corporations rely on software to help make decisions in vital areas like hiring, credit scoring, and political discourse. There is a growing desire to “open the black box” of complex algorithms and hold the institutions using them accountable. But across the globe, civil society faces a range of challenges as it pursues these goals. Testing the societal impact of algorithms from the outside is one of the biggest challenges for advocacy organizations: a critical amount of data may be needed, and privacy provisions must be accounted for. Analysis can be time-consuming and expensive. Additionally, the potential for reidentification and differing data formats can make it difficult for researchers to share results and collaborate effectively. Nevertheless, many researchers, journalists, and public interest groups have been able to make real progress in understanding these systems.

In this session, we’ll explore what it means to scrutinize automated systems in the real world, using both technical and non-technical methods. We’ll look at case studies where public actors were able to discover meaningful insights about an automated decision — without having access to the underlying code. And we’ll get a flavor for the many different ways to "hold algorithms accountable." The moderator will first share an overview of findings from research on different methods of scrutiny — both technical and non-technical — and invited session participants will then describe the work they each did to investigate an automated system. One case study presented will be AlgorithmWatch's “openSCHUFA” project, which investigates Germany’s leading credit scoring company. A low SCHUFA score means banks will reject your credit card application, car rental companies will turn you away as a customer, and network providers will say ‘no’ to a new Internet contract. No one knows how accurate SCHUFA’s data is or how it computes its scores; OpenSCHUFA wants to change this by analyzing thousands of credit records. The project has successfully crowdfunded more than 43,000 euros, and almost 18,000 people have requested their score.

Session leaders will then encourage attendees to share their own work investigating algorithms, to show that there are many ways to gain insight into automated decision systems without access to — or the ability to understand — the underlying code. Participants will then have the opportunity to brainstorm other systems worth scrutinizing, and how to do so.

Moderators

Miranda Bogen

Policy Analyst, Upturn
Miranda Bogen is a Policy Analyst at Upturn, where she focuses on the social implications of machine learning and artificial intelligence, and the effect of technology platforms on civil and human rights. She has coauthored reports on data ethics, governing automated decisions, and...

Speakers

Lorena Jaume-Palasi

Founder & Executive Director, AlgorithmWatch
Lorena is the executive director of AlgorithmWatch, a non-profit organisation that evaluates and sheds light on algorithmic and automation processes of social relevance. Her work focuses on the philosophy of law and the ethics of automation and digitization. Lorena has been appointed...

Matthias Spielkamp

Founder and Executive Director, AlgorithmWatch
Matthias Spielkamp is co-founder and executive director of AlgorithmWatch. He is co-founder and publisher of the online magazine iRights.info (Grimme Online Award 2006). He has testified before several committees of the German Bundestag, e.g. on AI and robotics. Matthias serves on the governing board of the German section of Reporters Without Borders and the...

Leandro Ucciferri

Policy Analyst, Asociación por los Derechos Civiles
I'm a lawyer, policy analyst, and researcher at the Association for Civil Rights (ADC in Spanish), a not-for-profit, independent NGO founded in 1995 and based in Buenos Aires, Argentina. My work focuses mostly on privacy issues. https://leandro.im


Wednesday May 16, 2018 17:15 - 18:15 EDT
201C