AI governance and standards
This ICC policy paper highlights how divergent AI regulations across countries can lead to fragmented global markets and increased business costs. ICC calls for greater coordination on the development of international, market-driven AI standards to bridge legal differences, reduce compliance burdens, improve market access and enhance cross-border innovation.
As AI systems become integral to business operations worldwide, fragmented governance approaches create significant challenges for companies of all sizes.
When different jurisdictions develop their own AI policies, laws and regulations, businesses face:
- higher compliance costs
- legal uncertainty arising from divergent requirements
- restricted market access
- barriers to cross-border collaboration and innovation
These challenges are particularly acute for small- and medium-sized enterprises (SMEs), which lack the resources to manage complex, jurisdiction-specific requirements.
International, market-driven standards are consensus-based guidelines that define how technologies should perform, interact and remain safe. They provide practical guidance that works across multiple legal frameworks, essentially creating a common language for AI governance globally.
Achieving internationally interoperable AI governance is significantly hindered by overlapping standardisation efforts, inconsistent terminology across different frameworks and limited awareness of existing AI standards.
These issues contribute to market fragmentation and a complex regulatory landscape, with regional or national bodies – sometimes even within the same country – issuing overlapping or even competing guidance. At the same time, the use of standards processes to advance specific policy agendas rather than technical excellence creates standards that may not serve broader global or business needs.
Without better coordination, these standardisation efforts risk adding complexity instead of reducing it, increasing compliance costs (which are especially burdensome for SMEs), and impeding cross-border collaboration and innovation.