Mitigating the risk of extinction from AI should be "a global priority alongside other societal-scale risks such as pandemics and nuclear war", the Center for AI Safety says.

The San Francisco-based nonprofit released the warning in a statement overnight after convincing major industry players to come out publicly with their concerns that artificial intelligence is a potential existential threat to humanity.

In just one sentence, the world's biggest creators of artificial intelligence are screaming in unison.

The Center's Executive Director Dan Hendrycks told the ABC via email that they released the statement because the public "deserves to know what the...