Leading AI developers agreed to work with governments to test new frontier models before they are released, to help manage the risks of the rapidly developing technology, in what was hailed as a "landmark achievement" concluding the UK's artificial intelligence summit.

Some tech and political leaders have warned that AI, if left uncontrolled, poses huge risks, ranging from the erosion of consumer privacy to harm to humans and the possibility of a global catastrophe. Those concerns have sparked a race by governments and institutions to design safeguards and regulation.

At the inaugural AI Safety Summit at Bletchley Park, home of Britain's World War Two code-breakers, political leaders...