(Reuters) – Elon Musk and a group of artificial intelligence experts and industry executives called in an open letter for a six-month pause in the development of systems more powerful than OpenAI’s recently launched GPT-4, citing potential risks to society and humanity.

The letter, published by the non-profit Future of Life Institute and signed by more than 1,000 people, including Musk, called for a halt to the development of advanced artificial intelligence until shared safety protocols for such designs are developed, implemented and audited by independent experts.

“Powerful AI systems should be developed only once we are confident that their effects will be positive and their risks will be manageable,” the letter reads.

The letter detailed the potential risks to society and civilization of human-competitive AI systems in the form of economic and political disruption, and urged developers to work with policymakers, governments and regulators.

Signatories included Stability AI CEO Emad Mostaque, researchers at Alphabet’s DeepMind, and artificial intelligence heavyweights Yoshua Bengio and Stuart Russell.

According to the European Union’s Transparency Register, the Future of Life Institute is primarily funded by the Musk Foundation, as well as the London-based effective altruism group Founders Pledge and the Silicon Valley Community Foundation.

The concerns come as EU police force Europol on Monday joined a chorus of ethical and legal concerns over advanced artificial intelligence such as ChatGPT, warning about the potential misuse of the system for phishing, disinformation and cybercrime.

Meanwhile, the UK government has unveiled proposals for an “adaptable” regulatory framework for artificial intelligence. The government’s approach, outlined in a policy paper published on Wednesday, would split responsibility for governing AI between its regulators for human rights, health and safety, and competition, rather than creating a new body dedicated to the technology.
Musk, whose automaker Tesla (NASDAQ: TSLA) uses artificial intelligence in its Autopilot system, has been vocal about his concerns over AI.

Since its release last year, Microsoft-backed OpenAI’s ChatGPT has prompted rivals to accelerate the development of similar large language models, and companies to integrate generative AI models into their products.

Sam Altman, CEO of OpenAI, did not sign the letter, a spokesperson for Future of Life told Reuters. OpenAI did not immediately respond to a request for comment.

“The letter isn’t perfect, but the spirit is right: we need to slow down until we better understand the implications,” said New York University professor Gary Marcus, who signed the letter. “They can cause serious harm... the big players are becoming increasingly secretive about what they are doing, which makes it harder for society to defend against whatever harms may materialize.”