Talks & Podcasts

2022

The Inside View podcast - Shahar Avin on Intelligence Rising, AI Governance

The Inside View is a podcast where people discuss their inside views about existential risk from AI.

When should you trust the developers of AI systems? | Trustworthy AI | AI for Good

Webinar on the paper Toward Trustworthy AI Development.

2021

Artificial Intelligence: Implications, Risks and Opportunities

Overview of the implications of artificial intelligence, mainly in the areas of security, cyber and information (In Hebrew).

2020

Aleks Listens podcast - Existential Risk — AI, Pandemics, and Cyber Security

Aleks Listens is a podcast about philosophy, politics, race, and mental health.

How can the spotlight on global fragility help promote resilience, peace and cooperation internationally?

Panel discussion on lessons from the COVID-19 pandemic.

2019

Discontinuities, and what to do about them

Overview of strategic considerations.

AI Risks, and How to Tell Them Apart

Classification of AI risks by time horizon (now, near, and far) and type (accidental, malicious, and systemic).

Science Funding: Theoretical perspectives

Ideal science, epistemic landscapes, competition models and the politics of science: Different theoretical lenses on science funding.

System 1 System 2 Thinking Cycles

Introduction and background to AI scenario role-plays.

2018

HaKmusa (The Capsule) podcast - Talking about the end of the world

Talking to Israel’s bestselling economics daily about the end of the world (In Hebrew).

Exploring artificial intelligence futures

High level sketch of possible AI futures and methods for exploring them.

The Cyberlaw podcast - discussing the malicious use of AI

The interview features an excellent and mostly grounded exploration of how artificial intelligence could become a threat as a result of the cybersecurity arms race.

Science funding is a gamble so let's give out money by lottery

Highlights of PhD findings.

Future of Life Institute podcast - discussing AI safety and security

On this month’s podcast, Ariel spoke with FLI co-founder Victoria Krakovna and Shahar Avin from the Center for the Study of Existential Risk (CSER). They talk about CSER’s recent report on forecasting, preventing, and mitigating the malicious uses of AI, along with the many efforts to ensure safe and beneficial AI.

Panel Discussion - Security & Privacy

Have Security & Privacy Risks Become the Ultimate Obstacle for AI & its Rapid Growth?

Artificial Intelligence Risks

Overview of short- and long-term risks, both accidental and from malicious use.

Robert Miles AI Safety - Superintelligence Mod for Civilization V

Rob and Shahar play the Civ V Superintelligence Mod

Existential Risk from Superintelligence in Sid Meier's Civilization V

Describing the Civ V Superintelligence Mod

2017

NonProphets (super)forecasting podcast - discussing AI

We discuss the prospects for the development of artificial intelligence.

Horror of the Apocalypse: Reality Bites

Comparing global catastrophic scenarios to horror movie apocalypses.

Artificial Intelligence: Between a tech utopia and human extinction

High level sketch of possible AI futures.