The European Commission described this approach as “future-proof,” a claim that has proved predictably arrogant: new AI systems have already thrown the bill’s clean definitions into chaos. Focusing on use cases works for narrow systems built for a specific purpose, but it becomes a category error when applied to generalised systems. Models such as GPT-4 don’t do any one thing; they simply predict the next word in a sequence. You can use them to write code, pass the bar exam, draw up contracts, create political campaigns, plot market strategy, and power AI companions or sexbots. In trying to regulate systems...