A UK government official said Tuesday that the final declaration at an international Artificial Intelligence (AI) summit in Paris did not address “national security” issues, following Britain’s refusal to sign the communiqué.
The closing agreement at the Paris meeting of world leaders prioritised regulating AI technology to make it “open” and “ethical”.
The United States and Britain, home to two of the world’s three biggest AI sectors, did not sign the deal. Prime Minister Keir Starmer has already pushed the UK to set its own path on AI regulation.
The declaration did not adequately address “harder questions around national security”, according to a UK government spokesperson.

“We felt the declaration didn’t provide enough practical clarity on global governance nor sufficiently address harder questions around national security and the challenge AI poses to it,” the spokesperson said.
The spokesperson added, however, that the UK would “continue to work closely with our international partners” and that it agreed with “much” of the declaration.
Dozens of signatories, including co-hosts France and India, as well as Germany and China, called for AI to be “open, inclusive, transparent, ethical, safe, secure, and trustworthy” within “international frameworks.”
Major industry players such as Sam Altman’s OpenAI did not back the agreement that AI should be “sustainable for people and the planet.”
“You’d only ever expect us to sign up to initiatives that we judge to be in our national interest,” a representative from 10 Downing Street told reporters on Tuesday in London.
Last month, Starmer announced plans to let companies test their innovations in the UK before the technology is regulated, pledging to make the UK a “world leader” in artificial intelligence.
Starmer said the government would regulate the technology in a way “that we think is best for the UK” and that the country would be “pro-growth and pro-innovation on regulation.”