Top artificial intelligence executives, including OpenAI CEO Sam Altman, on Tuesday joined other experts and professors in urging policymakers to treat the technology as one of the most serious risks to humanity. "Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war," more than 350 signatories wrote in a letter published by the nonprofit Center for AI Safety (CAIS). Besides Altman, they included the CEOs of AI firms DeepMind and Anthropic, as well as executives from Microsoft and Google. Also among them were Geoffrey Hinton and Yoshua Bengio...