
ISTANBUL
As leaders, technology executives, and policymakers from around the world met in Paris for the AI Action Summit this week, expectations were high for a productive gathering.
High on the agenda were issues regarding governance of artificial intelligence and the ethical, political, and economic challenges it presents – key concerns that people across the board agree require collective action.
Yet, when the dust settled, the biggest headline out of Paris turned out to be the US and UK refusing to sign a landmark declaration aimed at fostering ethical, inclusive, transparent, and sustainable AI development.
A total of 61 nations and organizations, including France, India, China, Indonesia, Italy, UAE, EU, and the African Union, reached a consensus on the ‘Inclusive and Sustainable Artificial Intelligence for People and the Planet’ agreement.
The pact underscores the importance of international cooperation to ensure AI remains open, ethical, and accessible, while preventing monopolization by a handful of powerful entities.
According to an official statement, the agreement is structured around six key objectives: promoting AI accessibility to reduce digital divides; ensuring AI is open, inclusive, transparent, ethical, safe, secure, and trustworthy; making innovation in AI thrive by enabling conditions for its development and avoiding market concentration, driving industrial recovery and development; encouraging AI deployment that positively shapes the future of work and labor markets and delivers opportunity for sustainable growth; making AI sustainable for people and the planet; and reinforcing international cooperation to improve global AI governance and policy coordination.
The declaration also highlighted the growing impact of AI on labor markets and emphasized the need for proactive measures to mitigate job displacement, while maximizing AI’s potential for economic advancement.
Why did the US and UK refuse?
The two major Western powers declined to sign the AI governance pact, citing concerns over national security, regulatory burdens, and sovereignty in AI policymaking.
A UK government spokesperson told The Guardian that the statement “had not gone far enough in addressing global governance of AI and the technology’s impact on national security.”
According to the BBC, the UK's refusal stemmed chiefly from doubts about the declaration's framework for global governance: the government supported much of its content but felt the pact lacked practical clarity on key issues, especially how AI would be governed globally.
The BBC also reported that Britain’s hesitation stemmed from broader concerns about ceding control over AI regulation to a multilateral framework, instead favoring an independent approach aligned with its own policies.
The decision drew criticism from some analysts, who pointed out that the UK was the first country to host an AI safety summit in 2023, positioning itself as a leader in global AI regulation.
The US, represented by Vice President JD Vance, took an even more critical stance, rejecting the agreement due to concerns that excessive AI regulation could stifle technological progress.
In his address at the summit, Vance warned that excessive regulation of AI “could kill a transformative industry just as it’s taking off.”
His comments emphasized a belief in pro-business policies that prioritize industry growth and innovation over restrictive governance frameworks.
The US has historically been wary of multilateral AI regulations, preferring a market-driven approach where private sector innovation leads technological advancements.
Vance’s remarks echoed the stance of the first Trump administration, which favored deregulation in tech industries.
Despite the two countries’ alignment in rejecting the agreement, the UK government maintained that its decision was independent and not influenced by the US.
“This isn’t about the US; this is about our own national interest, ensuring the balance between opportunity and security,” a UK spokesperson told the BBC.
Reactions
China has warned against drawing “ideological lines” in AI development.
Beijing “opposes drawing ideological lines, the generalization of national security concept, and the politicization of economic and technological issues,” Foreign Ministry spokesman Guo Jiakun said at a press conference on Wednesday.
His statement was in reaction to Vance’s claims in his address in Paris that “some authoritarian regimes have stolen and used AI to strengthen their military intelligence and surveillance capabilities, capture foreign data, and create propaganda to undermine other nations’ national security.”
Guo said China has “repeatedly emphasized its commitment to embracing intelligent transformation, vigorously promoting AI innovation, valuing AI security, and supporting enterprises in independent innovation.”
Another take on the issue came from Dario Amodei, head of Anthropic, the San Francisco-based OpenAI competitor, who described the summit as a “missed opportunity.”
“The need for democracies to keep the lead, the risks of AI, and the economic transitions that are fast approaching – these should all be central features of the next summit,” Amodei said in a written statement.
“At the next international summit, we should not repeat this missed opportunity … The advance of AI presents major new global challenges. We must move faster and with greater clarity to confront them.”