As global leaders converge in Paris for a summit on artificial intelligence (AI), experts have called for stronger regulation to prevent the technology from slipping beyond human control.
The summit, co-hosted by France and India, focuses on AI ‘action’ in 2025, placing less emphasis on the safety concerns that dominated previous meetings at Bletchley Park in the UK in 2023 and in Seoul, South Korea, in 2024.
France’s approach to the summit is aimed at encouraging global cooperation on AI governance, with an emphasis on sustainability and voluntary commitments rather than binding regulations.
However, Max Tegmark, head of the US-based Future of Life Institute, which has long warned about AI’s potential dangers, urged France not to miss the chance to take action.
Tegmark’s institute launched the Global Risk and AI Safety Preparedness (GRASP) platform, designed to map the major risks associated with AI and the solutions being developed worldwide.

“We’ve identified around 300 tools and technologies to address these risks,” said GRASP coordinator Cyrus Hodes.
The platform’s results will be shared with the OECD and the Global Partnership on Artificial Intelligence (GPAI), an international coalition of nearly 30 nations, including major European powers, Japan, South Korea, and the United States.
The summit also saw the release of the first International AI Safety Report, compiled by 96 experts and supported by 30 countries, the UN, EU, and OECD.
The report highlights risks ranging from fake content online to far more concerning threats, such as biological or cyberattacks.
Yoshua Bengio, the report’s coordinator and a noted computer scientist, warned of the potential for humans to lose control over AI systems, driven by the systems’ “own will to survive.”
Those fears centre on artificial general intelligence (AGI), which refers to AI that could match or surpass human intelligence across all fields. OpenAI’s Sam Altman has predicted that AGI could be developed as soon as 2026 or 2027.
Stuart Russell, a computer science professor at the University of California, Berkeley, expressed concerns about AI-powered weapons systems that could independently decide when and whom to target.
Russell, who coordinates the International Association for Safe and Ethical AI (IASEI), stressed that governments must take the lead in establishing safeguards for AI-powered weaponry.
Tegmark argued that the solution is simple: AI should be regulated like any other high-risk industry.