In an open letter, Elon Musk and a group of artificial intelligence experts and industry leaders have called for a six-month halt on developing systems more powerful than OpenAI’s recently launched GPT-4, citing potential societal risks.
OpenAI, supported by Microsoft, revealed the fourth iteration of its GPT (Generative Pre-trained Transformer) AI programme earlier this month, which has wowed users by engaging them in human-like conversation, composing songs, and summarising lengthy documents.
“Powerful AI systems should be developed only once we are confident that their effects will be positive and their risks will be manageable,” said the letter issued by the Future of Life Institute.
According to the European Union’s transparency register, the non-profit is mainly funded by the Musk Foundation, as well as the London-based group Founders Pledge and the Silicon Valley Community Foundation.
“AI stresses me out,” Musk said earlier this month. He is one of the co-founders of industry leader OpenAI, and his car-maker Tesla uses AI for its autopilot system.
Musk, who has voiced frustration with regulators' scrutiny of Tesla's Autopilot system, has nonetheless called for a regulatory authority to ensure that AI development serves the public interest.
More than 1,000 individuals, including Musk, signed the letter. OpenAI CEO Sam Altman did not sign, nor did Alphabet CEO Sundar Pichai or Microsoft CEO Satya Nadella.
The call comes as ChatGPT draws the attention of lawmakers, who worry about its effects on national security and education.
Europol, the European Union's police agency, warned on Monday about the possible misuse of the system in phishing attempts, disinformation campaigns, and cybercrime.