Biography

Shahar Avin is a senior research associate at the Centre for the Study of Existential Risk (CSER). He works with CSER researchers and others in the global catastrophic risk community to identify and design risk prevention strategies, through organising workshops, building agent-based models, and frequently asking naive questions.

Prior to CSER, Shahar worked at Google for a year as a mobile/web software engineer. His PhD was in philosophy of science, on the allocation of public funds to research projects. His undergraduate degree was in physics and philosophy of science, which followed mandatory service in the IDF. He has also worked at and with several startups over the years.

Interests

  • Existential Risk
  • Artificial Intelligence
  • Science Funding

Education

  • PhD in History and Philosophy of Science, 2014

    University of Cambridge

  • BA + MSci in Natural Sciences, 2010

    University of Cambridge

Recent Publications

(2024). Computing Power and the Governance of Artificial Intelligence.

(2024). Lessons from COVID-19 for GCR governance: a research agenda. In F1000Research.

(2023). Frontier AI regulation: Managing emerging risks to public safety.

(2023). Model evaluation for extreme risks.

(2023). Strengthening Resilience to AI Risk. CETaS Briefing Paper.

(2023). Rewards, risks and responsible deployment of artificial intelligence in water systems. In Nature Water.

(2022). Responsible artificial intelligence in agriculture requires systemic understanding of risks and externalities. In Nature Machine Intelligence.

(2021). Filling gaps in trustworthy development of AI. In Science.

Talks & Podcasts

UR22 - Perceptions of Risk & Harm from AI and Personal Data Use

Panel discussion on global AI risk perceptions

The Inside View podcast - Shahar Avin on Intelligence Rising, AI Governance

The Inside View is a podcast where people discuss their inside views about existential risk from AI.

When should you trust the developers of AI systems? | Trustworthy AI | AI for Good

Webinar on the paper Toward Trustworthy AI Development

Artificial Intelligence - Implications, Risks and Opportunities

An overview of the implications of artificial intelligence, mainly in the areas of security, cyber and information

Aleks Listens podcast - Existential Risk — AI, Pandemics, and Cyber Security

Aleks Listens is a podcast about philosophy, politics, race, and mental health.

How can the spotlight on global fragility help promote resilience, peace and cooperation internationally?

Panel discussion about lessons from/during COVID-19.

Discontinuities, and what to do about them

Overview of strategic considerations

Projects

AI strategy card game (BAK)

A multiplayer card game that explores key tensions in AI strategy and race dynamics. Designed to be played with a regular card deck.

Intelligence Rising

A strategic AI futures role-play exercise. Aimed at current and future AI decision makers. Designed to be facilitated for groups (in person or online) by domain specialists. Under continuous development by Technology Strategy Roleplay.

Personal Website

The very website you’re on. Built with Hugo Academic.

Civilization V Superintelligence Mod

A mod that replaces the space victory with a science victory brought about by building an aligned artificial superintelligence. Adds the risk of developing an unaligned artificial superintelligence, which leads to human extinction. Developed by Shai Shapira.

Should You PhD?

A short quiz to help you figure out whether you should do a PhD.

Simulating Science Funding

Agent-based simulation of a community of investigators on an epistemic landscape with a central source of funding.
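As a rough illustration of the kind of model this project involves, here is a minimal Python sketch, assuming a toy two-peak epistemic landscape, hill-climbing investigators, and a central funder that backs the currently best-performing agents each round; the grid size, agent counts, and update rules are illustrative assumptions, not the project's actual code.

    import random

    # Illustrative parameters (assumptions, not the project's actual values)
    GRID = 50        # the epistemic landscape is a GRID x GRID grid of research approaches
    N_AGENTS = 30    # number of investigators
    N_FUNDED = 10    # how many investigators the central funder supports each round
    ROUNDS = 100

    def significance(x, y):
        """Toy epistemic landscape: two 'peaks' of scientific significance."""
        peaks = [(10, 10, 1.0), (35, 40, 0.8)]
        best = 0.0
        for px, py, height in peaks:
            d2 = (x - px) ** 2 + (y - py) ** 2
            best = max(best, height * 2.718 ** (-d2 / 100))
        return best

    # Investigators start at random positions on the landscape.
    agents = [[random.randrange(GRID), random.randrange(GRID)] for _ in range(N_AGENTS)]

    for _ in range(ROUNDS):
        # The central funder backs the agents whose current approaches look most significant.
        ranked = sorted(range(N_AGENTS), key=lambda i: significance(*agents[i]), reverse=True)
        funded = set(ranked[:N_FUNDED])
        for i, (x, y) in enumerate(agents):
            # Funded agents refine locally; unfunded agents drift more widely.
            step = 1 if i in funded else 3
            nx = min(GRID - 1, max(0, x + random.randint(-step, step)))
            ny = min(GRID - 1, max(0, y + random.randint(-step, step)))
            # Move only if the new approach looks at least as significant (hill-climbing).
            if significance(nx, ny) >= significance(x, y):
                agents[i] = [nx, ny]

    print("mean significance:", sum(significance(*a) for a in agents) / N_AGENTS)

Comparing different funding rules (for example, ranking by novelty rather than current significance) is the kind of question such a simulation can probe; this sketch shows only a single "fund the current best" rule.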

SimScience, The Science Simulator

A silly web game made to explore the concept of authority in scientific decisions.
