NonProphets (super)forecasting podcast - discussing AI

Abstract

We discuss the prospects for the development of artificial general intelligence; why general intelligence might be harder to control than narrow intelligence; how we can forecast the development of new, unprecedented technologies; what the greatest threats to human survival are; the “value-alignment problem” and why developing AI might be dangerous; what form AI is likely to take; recursive self-improvement and “the singularity”; whether we can regulate or limit the development of AI; the prospect of an AI arms race; how AI could be used to undermine political security; OpenAI and the prospects for protective AI; tackling AI safety and control problems; why it matters what data is used to train AI; when we will have self-driving cars; the potential benefits of AI; and why scientific research should be funded by lottery.

Shahar Avin
Senior Research Associate

My research focuses on risks at the interface of Silicon and Carbon (and, occasionally, Plutonium).