
86 Nations Sign AI Safety Declaration in New Delhi, But Implementation Remains Unclear

A global summit in India brought together 86 countries, including the US and China, to call for secure and trustworthy artificial intelligence systems, though the declaration stops short of binding commitments.

Chibueze Wainaina

Syntheda's AI technology correspondent covering Africa's digital transformation across 54 countries. Specializes in fintech innovation, startup ecosystems, and digital infrastructure policy from Lagos to Nairobi to Cape Town. Writes in a conversational explainer style that makes complex technology accessible.


Eighty-six countries gathered in New Delhi this weekend to sign a declaration calling for "secure, trustworthy and robust" artificial intelligence systems, marking one of the largest international efforts yet to establish common ground on AI governance. The statement, issued Saturday, includes signatures from both the United States and China—two nations rarely aligned on technology policy—suggesting growing global concern about AI's rapid development.

The New Delhi summit represents the latest attempt by the international community to grapple with artificial intelligence systems that are advancing faster than regulatory frameworks can keep pace. According to eNCA, the declaration emerged from discussions among dozens of nations seeking to establish shared principles for AI development and deployment. However, the document appears to focus on aspirational language rather than concrete enforcement mechanisms, leaving questions about how these principles will translate into actual policy.

The inclusion of both Washington and Beijing in the agreement is particularly noteworthy given their ongoing technological rivalry. The two powers have clashed repeatedly over semiconductor exports, data privacy, and technology transfer, making their joint participation in this declaration a rare moment of consensus. Yet the broad nature of the agreement—calling for AI to be "secure, trustworthy and robust" without defining specific technical standards or compliance requirements—suggests the signatories may have prioritized diplomatic unity over detailed implementation plans.

For African nations watching these developments, the stakes are high. The continent is trying to harness AI for development challenges even as it lacks the regulatory infrastructure that wealthier nations have begun building. Kenya, South Africa, and Rwanda have all launched AI strategies in recent years, but these efforts remain fragmented. A global framework could provide valuable guidance, but only if it accounts for the resource constraints and infrastructure gaps that African countries face. The question is whether declarations like this one will include provisions for technology transfer and capacity building, or simply establish standards that favor nations already leading in AI development.

The timing of the summit reflects mounting pressure on governments to act. Generative AI tools have exploded in popularity over the past two years, with applications ranging from customer service chatbots to medical diagnosis systems. This proliferation has sparked concerns about everything from job displacement to misinformation to autonomous weapons systems. The European Union has moved ahead with comprehensive AI regulation through its AI Act, while the United States has taken a more sector-specific approach. China, meanwhile, has implemented rules governing algorithm recommendations and deepfakes.

What remains unclear is how Saturday's declaration will influence these divergent regulatory paths. The document's emphasis on security and trustworthiness aligns with existing priorities in most jurisdictions, but the devil will be in the details. Will "secure" AI mean the same thing in Beijing as it does in Brussels or Washington? How will "trustworthy" systems be verified and certified? And what happens when national security concerns clash with the principle of robust, transparent AI?

The summit's choice of venue is also significant. India has positioned itself as a major player in the global AI landscape, with a thriving tech sector and ambitions to become an AI superpower. By hosting this gathering, New Delhi signals its intention to help shape international norms rather than simply adopt standards set by others. For other developing nations, India's leadership could provide a model for asserting influence in global technology governance discussions that have historically been dominated by Western powers.

As the 86 signatories return home, the real work begins: translating Saturday's declaration into laws, standards, and enforcement mechanisms that can actually govern AI systems. Without follow-up commitments and accountability structures, even the most well-intentioned international agreements risk becoming diplomatic talking points rather than meaningful policy frameworks. The world will be watching to see whether this moment of consensus can produce lasting change.