AI

Overview of strategic considerations.

AI Risks, and How to Tell Them Apart

Classification of AI risks by time horizon (now, near, and far) and by type (accident, malicious, and systemic).

Autonomy and machine learning at the interface of nuclear weapons, computers and people

Describes new threats posed by the introduction of ML at the interfaces between people and nuclear weapons systems (and related systems), and proposes policy responses.

System 1 System 2 Thinking Cycles

Introduction and background to AI scenario role-plays.

Surveying Safety-relevant AI Characteristics

Survey of known and potential safety-relevant AI characteristics, covering internal characteristics, system-environment effects, and training.

Exploring artificial intelligence futures

Reviews different methods for exploring AI futures, including fiction, single-discipline studies, several multidisciplinary approaches, and interactive methods.

Exploring artificial intelligence futures

High-level sketch of possible AI futures and methods for exploring them.

The Cyberlaw podcast - discussing the malicious use of AI

The interview offers a grounded exploration of how artificial intelligence could become a threat as a result of the cybersecurity arms race.

Beyond Brain Size: Uncovering the Neural Correlates of Behavioral and Cognitive Specialization

Limits of brain size as a measure in the study of brain-behavior relationships across species, and suggestions for alternative research approaches.

Future of Life Institute podcast - discussing AI safety and security

On this month’s podcast, Ariel spoke with FLI co-founder Victoria Krakovna and Shahar Avin from the Centre for the Study of Existential Risk (CSER). They discussed CSER’s recent report on forecasting, preventing, and mitigating the malicious use of AI, along with the many efforts underway to ensure safe and beneficial AI.