Overarching Narrative on Artificial Intelligence
ICC has released a four-pillar narrative on business considerations for the trustworthy, responsible and ethical development of artificial intelligence (AI). In addition, real-life examples illustrate the effectiveness of voluntary business approaches in responding to existing AI guidelines and addressing emerging policy challenges.
Artificial intelligence is revolutionising global industries by augmenting human abilities in areas such as language processing, generating creative content, predictive analytics and analytical reasoning, as well as learning and decision-making.
As AI continues to shape economies and societies, a robust governance model becomes essential to harness its benefits while mitigating risks.
ICC outlines the four pillars of global AI governance from the perspective of global business:
- Principles and codes of conduct
- Regulation
- Technical standards
- Industry self-regulation
Each pillar plays a crucial role in fostering trustworthy, responsible and ethical AI development.
We show how, by adhering to these frameworks, businesses drive innovation, ensure compliance, and build trust, contributing to sustainable and equitable growth.
What is artificial intelligence?
Artificial intelligence is a technology that enables the simulation or extension of human intelligence in machines, allowing them to perform tasks commonly associated with human intelligence, such as speech recognition, content creation, problem-solving, learning, and decision-making, with the potential to boost productivity and augment creativity.
While the term “artificial intelligence” has gained popularity in recent years, we must keep in mind that AI is a broad and diverse field, encompassing various subfields and approaches, such as machine learning, neural networks, natural language processing, and robotics, among others.
In the interest of global convergence on terminology, ICC recommends using the common definition of an AI system agreed at the OECD, which is
“a machine-based system that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments. Different AI systems vary in their levels of autonomy and adaptiveness after deployment.”
The transformative impact of these technologies permeates every facet of modern life, reshaping economies, industries, and societies on a global scale, and it is important to leverage AI to help achieve the UN Sustainable Development Goals (SDGs).
From the automation of routine tasks to the development of sophisticated algorithms capable of complex decision-making or creation of new content like music, video, text, audio or images, AI has emerged as a cornerstone of innovation.
Its ability to learn from vast amounts of data, identify patterns, and accelerate the generation of insights has revolutionised industries ranging from healthcare and finance to manufacturing and transportation.
The ability to discover new insights in large data sets will drive new frontiers in science, and is being leveraged to develop new treatments and medicines, as well as to help doctors and nurses improve patient care.
AI can be a powerful accelerant for the scale and pace of sustainability solutions needed to address the climate crisis, for example, by helping to integrate new sources of renewable energy onto the grid, optimising energy and water consumption, anticipating hazardous weather events, and speeding up the discovery of low carbon building materials.
Moreover, AI technologies have the potential to broaden personalised access to information and resources, bridging the digital divide and empowering individuals and communities worldwide.
From online education platforms providing access to quality learning resources to AI-powered language translation tools breaking down language barriers, AI has accelerated the spread of knowledge and opportunities.
Yet, amidst the promise of AI-driven innovation, the potential risks and challenges associated with its widespread adoption should be recognised. The design, development, deployment, and use of AI pose challenges that often concern the relationship between technology and humans and intersect with various socioeconomic dimensions.
AI’s impact on people’s rights, as well as considerations of accountability, transparency, safety, competition, sustainability, and inclusion, should be taken into account. These risks, if left unaddressed, can impede innovation and progress, undermining the trust necessary for the adoption and use of AI technologies.
Recent advances, and the overwhelming popularity of user-friendly generative AI, have exponentially amplified its power to spur both beneficial and harmful change.
It is against this backdrop of immense promise and potential challenges that the imperative for robust AI governance emerges.
Businesses are at the forefront of AI development and deployment.
What are the four pillars of global artificial intelligence governance?
As AI continues to evolve, it is essential to strike a balance between realising its full potential for socioeconomic development and ensuring that it aligns with globally shared values and principles that foster
- equality,
- transparency,
- accountability,
- fairness,
- reliability,
- privacy, and
- a human-centric approach.
Over the past decade, this has created an increasingly complex, multi-layered policy environment and a proliferation of policy and regulatory approaches which are sometimes duplicative. These different approaches are gaining rapid momentum, as the technology continues to speed ahead.
The current global governance model for AI is based on four pillars:
- Principles and guidelines
- Regulation
- Technical standards
- Self-regulation
They are illustrated by industry best practices for responsible AI.
Guiding principles for responsible AI development, deployment and use provide a baseline framework for ethical and sustainable governance.
The OECD’s 2019 Principles on Trustworthy AI, revised in 2024 and endorsed by 47 countries, exemplify these efforts and emphasise cooperation “within and across jurisdictions to promote interoperable governance and policy environments”.
Similarly, the UN General Assembly’s 2024 resolution and UNESCO’s 2021 Recommendation on the Ethics of Artificial Intelligence underscore the importance of human rights and ethical standards in AI at a global level.
Additionally, the G7 Hiroshima AI Process and the G20 Leaders Declaration reinforce these values by aligning their frameworks with the OECD AI Principles, emphasising responsible AI to achieve the SDGs.
A growing number of plurilateral initiatives, such as the AI Safety Summits or the Joint AI Roadmap of the US-EU Trade and Technology Council, and national approaches, such as the US White House Executive Order on AI, the UK’s AI Principles, Australia’s AI Action Plan, the PRC’s Position Paper on Strengthening Ethical Governance of Artificial Intelligence or Singapore’s Model AI Governance Framework, add to the AI guidelines and principles landscape.
Globally agreed principles and guidelines for responsible AI are necessary to provide a comprehensive framework for ethical and sustainable AI governance, avoiding fragmented and duplicative AI governance solutions and spanning multilateral and regional approaches.
By adhering to these principles, governments, organisations, and stakeholders can foster trust, promote innovation, and harness the transformative potential of AI for the benefit of society.
Recent developments in Europe, in particular the Council of Europe’s Framework Convention on Artificial Intelligence and Human Rights, Democracy and the Rule of Law and the EU’s AI Act, represent a significant milestone in the governance and regulation of AI. The EU’s approach, grounded in ethics and human rights, sets a precedent for global AI regulation based on a risk-based classification system.
National efforts are also underway in several countries to devise and implement AI regulatory frameworks, aiming to promote responsible AI development, deployment and use, and strike a balance between boosting investment and innovation while protecting citizens from high-risk systems. Such initiatives include, among others, Brazil’s proposed AI Bill, Canada’s AI and Data Act, China’s Scientific and Technological Ethics Regulation, India’s proposed Digital India Act, South Korea’s AI Act, or the United Arab Emirates’ Council for AI and Blockchain.
At the same time, other countries opt against introducing AI-specific regulation, aiming instead to create enabling policy environments built on principles-based guidance and industry self-regulation, and relying on existing regulations and sectoral laws to set guardrails for AI systems. Such efforts include, for example, Singapore’s AI Governance Framework, Japan’s vision for “agile governance”, or the UK’s proposed “context-based, proportionate approach” to AI governance.
International standards play a pivotal role in ensuring consistency in the practical implementation of global, regional, and national AI policies and laws.
For instance, numerous upcoming AI regulations require AI system providers to put in place a risk management system.
Organisations such as the US National Institute of Standards and Technology (NIST), with its AI Risk Management Framework, and international standards bodies such as ISO/IEC, CEN-CENELEC, and ITU are actively developing technical standards to advance consistency in the ways in which impact assessments are conducted. For instance, ISO/IEC 42001 seeks to provide an overarching framework for AI management systems. Additionally, ISO/IEC DIS 42005 is in development, detailing the procedures an organisation should follow when conducting AI system impact assessments.
Although approaches to detailed requirements such as risk assessment and management may vary across organisations, adopting voluntary consensus-based standards (for example, the extensive work of ISO/IEC JTC 1/SC 42, including ISO/IEC 42001, ISO/IEC 23894, ISO/IEC 42005, ISO/IEC 38507) can serve as a solid foundation for managing AI risks throughout the AI system’s lifecycle and ensure an internationally consistent approach to the implementation of AI laws.
Self-regulation within the AI industry, illustrated by best practices for responsible AI, is a crucial component of the global governance model.
Companies are increasingly committing to ethical AI practices, developing internal policies, and participating in international initiatives to ensure the safe and beneficial deployment of AI technologies. The AI Safety Summits, for instance, have seen commitments from companies to publish frontier AI safety policies and deepen collaboration with governments.
Given the voluntary nature of these commitments, continued dialogue between companies and governments throughout the implementation process, as well as transparency on progress, will be important for building public trust in such measures.
Industry-led efforts to align with technical standards demonstrate a proactive approach to risk management and ethical impact assessments. These self-regulatory measures complement formal regulations and standards, fostering an environment where AI development is both innovative and aligned with societal values.
See the section below on industry best practices to learn more about what businesses do to ensure the design, development, deployment and use of trustworthy AI systems.
What are key business considerations on the global governance of artificial intelligence?
Effective governance of AI requires international cooperation. A cohesive framework for such cooperation should prioritise convergence on governance standards to prevent fragmentation of the policy landscape. There needs to be an international interoperable approach that will enable industry standards, domestic regulation, and global governance to come together and reinforce one another.
Policy frameworks must be rooted in democratic principles and designed to anticipate and address potential risks and challenges.
A risk-based regulatory approach that differentiates between high- and low-risk scenarios provides focus and protection against harm where it is most needed, while ensuring that regulations are not overly prescriptive and do not hamper innovation. For high-risk AI systems there should be a requirement for developers and deployers to put in place measures such as a risk management system, human oversight, data governance and security, technical documentation, record keeping and transparency.
Additionally, policy frameworks should recognise the diverse roles and responsibilities of stakeholders throughout the AI lifecycle, from development to deployment and beyond. Related to this, laws should reflect the relevant layers of the AI technology stack and distribute responsibility across the value chain appropriately.
There is a need for international collaboration to monitor for, and respond to, globally significant safety and security risks, building on the work begun by the November 2023 UK Safety Summit and continued through the May 2024 Seoul AI Summit.
We remain convinced that, to be effectively implemented, governance frameworks need stakeholder input and buy-in that comes from grassroots, bottom-up approaches, to ensure meaningful AI policy that supports responsible innovation and does not unduly hamper it.
As businesses are at the forefront of AI development and deployment, their partnership is vital:
- Business engagement ensures that AI technologies are designed, deployed and utilised in ways that align with ethical considerations, human rights, and the welfare of society.
- Business expertise is necessary to continuously shape implementation methods and help address practical challenges faced by organisations.
- Business support reinforces accountability of AI systems, fostering trust among stakeholders, including consumers, companies and governments, who rely on businesses to act in the best interests of society.
- Strong and continued business involvement and support enables widespread adoption and harmonisation of responsible AI practices globally and the establishment of consistent standards, avoiding fragmented regulatory environments and promoting a shared vision on trustworthy AI.
This is why ICC continues to engage in ongoing key multilateral policy discussions to guard against the risks of excessive policy fragmentation.
What are the critical policy areas to look out for in artificial intelligence?
Critical policy areas requiring attention include data governance, safety and security, inclusion and inclusive access, environmental sustainability, competition, mis- and disinformation, intellectual property, capacity building, skilling, and education and workforce adaptations.
ICC stands poised to contribute expertise and resources to the development of robust policies in these domains. Through ongoing dialogue and collaboration ICC aims to identify emerging challenges and opportunities in the AI landscape, updating policy priorities on a continuous basis to ensure relevance and effectiveness.
How do industry best practices and multistakeholder initiatives contribute to trustworthy artificial intelligence?
Industry and multistakeholder initiatives on AI policy and governance serve as valuable learning tools for policymakers and fellow industry stakeholders alike. These examples demonstrate the effectiveness of various approaches to addressing ethical, legal, and societal implications of AI technologies. By maintaining a dynamic repository of best practices, ICC fosters knowledge-sharing and encourages the adoption of responsible AI practices across diverse contexts and industries.
Case studies – Implementing artificial intelligence principles
Case study | Region(s) of focus of the case study | Organisation | Business sector(s) involved in the case study | Description of the case study |
---|---|---|---|---|
Making AI-driven decisions that are fair and free of any harmful bias | Africa and Europe | Vodafone Group | Telecommunications and specialised technologies | Vodafone takes its responsibilities seriously in the deployment of AI and has instituted the Vodafone Responsible AI Program to meet this challenge. The programme is a cross-departmental, company-wide AI governance programme, which operates in close collaboration with other stakeholders and oversees that the AI systems and products the company develops or sources are ethical, fair, accountable, and trustworthy, minimising risks to individuals and society at large. The Vodafone Responsible AI Program is founded on the operational governance model of the company’s privacy programme. It covers the following principles: 1. Transparency and explainability 2. Accountability and responsibility 3. Ethics and fairness 4. Preservation of privacy and security 5. Human rights, diversity, and inclusivity 6. Maximising the benefits of AI while managing the disruption of its implementation |
Commitments to traceability and sustainability for the design, development and use of AI | Europe | Telefónica | Telecommunications and specialised technologies | In addition to long-standing and consolidated privacy governance, Telefónica began its journey towards AI governance in 2018 by committing to ethical principles for AI, updated in 2024, to ensure its positive impact on society and its application in the design, development and use of the company’s products and services. The updated principles also include commitments on AI traceability and sustainability. These ethical principles led to the creation of the first procedures and compliance requirements, for responsible AI. In 2022, a responsible AI governance pilot was rolled out, testing new roles and methods for evaluating products and services, and initiating training and awareness raising. This entire process allowed the approval of an internal AI governance model in December 2023, which commits Telefónica companies and their employees to a governance model that takes advantage of the lessons learned along the way: extensive reach of governance within the business area that develops, acquires, uses or markets the AI system, taking responsibility for categorising and implementing requirements, with clear roles; strong coordination mechanisms; clear risk orientation; improved decision-making by involving ethics experts where necessary. |
Responsible AI Standard | Oceania; Northern America; Middle East and North Africa; Latin America and the Caribbean; Europe; Asia; Africa; | Microsoft | Information technology | Microsoft’s Responsible AI Standard is a comprehensive framework that guides teams within the company on how to build and use AI responsibly. The standard covers six key dimensions of responsible AI: fairness; reliability and safety; privacy and security; inclusiveness; transparency; and accountability. For each dimension, the Responsible AI Standard defines what these values mean and how to achieve Microsoft’s goals in practice. The Responsible AI Standard also provides tools, processes, and best practices to help Microsoft teams implement the standard throughout the AI lifecycle, from design and development to deployment and monitoring. The approach that the Responsible AI Standard establishes is not static, but instead evolves and improves based on the latest research, feedback, and learnings. |
Orange data and AI ethical council with external experts | Europe; Africa; Middle East and North Africa; | Orange | Telecommunications and specialised technologies | Orange advocates and acts to ensure that the development and deployment of AI systems are ethical and responsible. To this end, Orange has set up a comprehensive AI governance framework, with an ethical charter for data and AI guidelines, a network of ethical AI officers across its footprint, a task force of legal, technical and compliance experts, as well as tools and training for its employees. At the heart of this governance is Orange’s data and AI ethical council, created in 2021. It is an advisory and independent body whose mission is to define an ethical framework for AI and data, outside regulatory obligations, in line with Orange’s values and purpose. It issues advisory opinions to the Executive Committee on governance and on concrete use cases. It monitors the company’s implementation of the ethical principles validated by the Executive Committee, as defined in the ethical charter for data and AI. It is made up of 11 external figures, chosen for their independence, neutrality and expertise in these areas, as well as for the diversity of their profiles. Each member is committed to providing the Group with their international expertise and independent standpoint to advise on ethical issues relating to the Group’s AI-based and data processing systems. |
Labels and third-party evaluation | Europe | Orange | Telecommunications and specialised technologies | Orange started its responsible AI journey several years ago, convinced that, in order to prove its maturity, it needs to be challenged by third parties. Orange has engaged with several labels in the domain of responsible AI, to participate in the effort to define an evaluation methodology and to gain experience with external audits. In 2021 Orange was the first company to obtain the GEES-AI label, rewarding its actions to promote diversity and avoid the risks of discrimination in its AI systems. The label was renewed in 2022 and the company is currently working on renewing it again. Orange is also engaged in the Positive.ai initiative, and received the Positive.ai label in 2024, covering evaluation at both the organisation and AI product level. |
Internal and external red-teaming of models or systems | North America | Google | Information technology | Google rolled out internal and external red-teaming of models or systems, targeting areas such as misuse, societal risks, and national security concerns, including biosecurity and cybersecurity. At the same time, Google implemented ongoing systematic adversarial testing, safety and fairness evaluations in multiple languages, and performance monitoring for major generative AI product launches to ensure compliance with content safety policies. An internal company-wide LLM red teaming “Hack-AI-thon” engaged security and responsible AI experts, resulting in over 2,600 safety-focused conversations that enhanced the technology’s safety and security. Additionally, a dedicated AI red team was established to address various risks, including security, abuse, bias, and other societal concerns. |
Cybersecurity and insider threat safeguards to protect proprietary and unreleased model weights | Global | Google | Information technology | Google is developing open-source tools and infrastructure to support the security of its models, along with internal standards and technical controls to ensure the provenance, confidentiality, and integrity of these models across the company. This initiative is part of a broader framework aimed at the safe and secure development and deployment of AI systems, providing a structured approach to safeguard these systems and enhance trust in their use. |
Third-party discovery and reporting of issues and vulnerabilities | North America | Google | Information technology | Google expanded its Bug Hunter Program (including the Vulnerability Rewards Program) to reward and incentivise anyone to identify and report vulnerabilities in Google AI systems. The Bug Hunter Program, part of the Bug Hunting Community, is an international effort that aims to keep Google products and the Internet safe and secure. The programme follows a three-step process: preparation, where hunters get inspiration from the community and access resources to start hunting; reporting, where hunters report a security vulnerability; and rewards, where bug hunters collect their bugs as digital trophies and earn paid rewards. The Bug Hunter Program has allowed Google to invest proactively in security researchers who test products for vulnerabilities, as well as to identify and fix software vulnerabilities using AI. |
Safe, secure, and transparent development and use of generative AI | North America | White House commitments on safety, security, and trust | Information technology | To date, the US Government has secured the voluntary commitment of 16 US companies to promote the safe, secure, and transparent development and use of generative AI (foundation) model technology. Guided by the enduring principles of safety, security, and trust, the voluntary commitments address the risks presented by advanced AI models and promote the adoption of specific practices – such as red-team testing and the publication of transparency reports – that will propel the whole ecosystem forward. The companies intend these voluntary commitments to remain in effect until regulations covering substantially the same issues come into force. Individual companies may make additional commitments beyond those included here. |
Advancing the safe development and deployment of frontier AI systems | Oceania; Northern America; Middle East and North Africa; Latin America and the Caribbean; Europe; Asia; Africa; | Amazon, Anthropic, Google, Meta, Microsoft, and OpenAI | Information technology | The Frontier Model Forum is an industry non-profit dedicated to advancing the safe development and deployment of frontier AI systems (general purpose AI models that constitute the state of the art). The Forum aims to (1) identify best practices and support standards development for frontier AI safety, (2) advance independent research and science of frontier AI safety, and (3) facilitate information sharing about frontier AI safety among government, academia, civil society and industry. The Forum leverages the technical and operational expertise of its member companies to benefit the entire AI ecosystem, advancing AI safety research and supporting the development of AI applications to address society’s most pressing needs. More specifically, the Forum supports efforts to create applications that address issues like climate change mitigation and adaptation, early cancer detection and prevention, and combating cyber threats. Additionally, it identifies best practices for the responsible development and deployment of frontier models, helping the public understand the nature, capabilities, limitations, and impact of AI technology. |
Developing and deploying frontier AI models and systems responsibly | Africa; Asia; Europe; Latin America and the Caribbean; Middle East and North Africa; Northern America; Oceania; | Amazon, Anthropic, Cohere, Google, G42, IBM, Inflection AI, Meta, Microsoft, Mistral AI, Naver, OpenAI, Samsung Electronics, Technology Innovation Institute, xAI, Zhipu.ai | Information technology | At the May 2024 AI Safety Summit in Seoul, 16 companies and organisations agreed to Frontier AI Safety Commitments. The commitments are grouped into three outcomes: (1) Organisations effectively identify, assess and manage risks when developing and deploying their frontier AI models and systems; (2) Organisations are accountable for safely developing and deploying their frontier AI models and systems, and (3) Organisations’ approaches to frontier AI safety are appropriately transparent to external actors, including governments. Signatories also committed to publishing a frontier AI safety policy before the next AI Safety Summit to be held in France in early 2025. These policies will set out how the organisation will identify and mitigate public safety and security risks of highly capable models, including setting risk thresholds that will guide decisions on whether and how to deploy a model. |
Explainable AI systems for research | Europe | Positive Group | Information technology | Positive Veritas operates a tax and legal research platform built on the premise of AI-driven workflow acceleration. To responsibly deploy complex AI systems in a domain where accuracy and completeness are of the essence, novel technical measures were developed to keep AI models on track and ground them in reality, whilst ensuring that every output from an AI system is explainable and traceable back to the precise real-world data used to derive it. In effect, the generation of false or misleading information is now substantially rarer, and all outputs by AI systems are verifiable. |
Annual Responsible AI Transparency Reports | Africa; Asia; Europe; Latin America and the Caribbean; Middle East and North Africa; Northern America; Oceania; | Microsoft | Information technology | In July 2023, Microsoft committed to publishing an annual report on its responsible AI programme. The inaugural report was published in May 2024. The annual report aims to share Microsoft’s maturing responsible AI practices, reflect on what it has learned, chart its goals, hold itself accountable, and earn the public’s trust. The first report provided insights into how Microsoft builds applications that use generative AI, manages risks, makes decisions and oversees the deployment of those applications. It also shared details of how the company supports its customers to build their own AI applications responsibly. |
AI in international arbitration | Global | Silicon Valley Arbitration & Mediation Center (SVAMC) | Legal services | The Silicon Valley Arbitration & Mediation Center (SVAMC) Guidelines on the Use of Artificial Intelligence in Arbitration introduce a principle-based framework for the use of AI tools in arbitration. The guidelines are meant to serve as a point of reference for arbitral institutions, arbitrators, parties, their representatives, and experts. They raise awareness of the responsibility of all participants using AI tools in connection with an arbitration to make reasonable efforts to understand each AI tool’s limitations, biases, and risks, and to the extent possible mitigate them. The guidelines address the responsibility to safeguard confidentiality, disclosure of the use of AI, the duty of competence or diligence in the use of AI, and respect for the integrity of the proceedings and evidence, among other issues. |
Safe Human-AI teaming in finance | Europe | Positive Group | Information technology | Mantle helps finance professionals automate document processing, transaction classification and analysis. Often, these processes can be executed autonomously with high accuracy. In the remaining cases, operators must exercise judgement and make use of context that AI systems lack. To establish effective supervision over systems designed to take action on behalf of humans, Mantle committed to a set of AI safety principles and deployed strong technical guardrails to ensure that (1) all actions recommended by AI systems are auditable and interpretable at every step; (2) all actions recommended by AI systems are reviewed and approved by an operator prior to execution; and (3) operators can control what AI systems can and cannot see and what they can and cannot do on behalf of humans. |
Environmental Sustainability and AI | Africa; Asia; Europe; Latin America and the Caribbean; Middle East and North Africa; Northern America; Oceania; | Microsoft | Information technology | Microsoft has taken measures to manage its AI data centres in an environmentally sensitive manner and uses AI to advance environmental sustainability needs. Microsoft recognises that its data centres play a key part in achieving its goals to be carbon negative, water positive and zero waste by 2030. Microsoft has made long-term investments to bring as much or more carbon-free electricity than it will consume onto the grids where it builds and operates data centres. It also applies a holistic approach to the Scope 3 emissions relating to its investments in AI infrastructure, from the construction of its data centres to engaging its supply chain. This includes supporting innovation to reduce the embodied carbon in its supply chain and advancing its water positive and zero-waste goals throughout its operations. At the same time, Microsoft recognises that AI can be a vital tool to help accelerate the deployment of existing sustainability solutions and the development of new ones. For example, AI can help to expedite the integration of renewables onto electric grids, develop energy storage solutions, reduce food waste, foster the creation of high carbon-absorbing materials, and enable accurate weather forecasting weeks or even months in advance of current capabilities. |
AI enabling intelligent manufacturing | Asia | TCL Technology Group Corporation | Information technology | Since 2015, TCL has positioned intelligent manufacturing as the core of its development strategy, leveraging artificial intelligence technology to enhance production efficiency and promote the transformation of factories into smart manufacturing facilities. This successful transition has driven the company’s growth. In 2018, to foster technological innovation and industry collaboration, TCL incubated Getech, an industrial internet platform. Getech is committed to sharing TCL’s advanced manufacturing insights and technologies worldwide. By sharing advanced manufacturing technology and management experience, it helps partners enhance their competitiveness. Moving into 2021, TCL further launched the “Sunrise Initiative,” deepening cooperation with international partners, sharing resources, and jointly promoting the innovative upgrade of the manufacturing industry. The plan is dedicated to building a strong capability ecosystem, gathering collective wisdom and efforts to accelerate the digital transformation of the manufacturing industry, promote the sustainable development of the industry, and enhance market competitiveness. |
Evaluating Orange partners’ responsible AI maturity | Europe | Orange | Information technology | Orange has several partnerships with big players but also with small businesses, making it a key actor in the responsible AI value chain. Building on its early advances in responsible AI governance, Orange is convinced it can work with large businesses on alignment, transparency and responsibility, as well as provide training and guidelines for smaller ones that are not yet up to speed. In this context, Orange has conducted a proof of concept with several companies that deliver services involving partial or full AI implementation, to implement an evaluation form for responsible AI maturity. This enables Orange to track some of its contractors’ level of maturity, review the actions they are considering and decide whether or not these are compliant with Orange’s responsible AI standards. Orange is working to include this approach in its compliance process for contractor evaluation. |
Case studies – AI policy projects
Case study | Region(s) of focus of the case study | Organisation | Business sector(s) involved in the case study | Description of the case study |
---|---|---|---|---|
Enhance detection of AI-generated media | Global | Google | Information technology | Google is actively investing in tools for synthetic media detection and incorporating watermarking and metadata into models from the outset. Organisations including Google have joined initiatives such as the Partnership on AI (PAI) Synthetic Media Framework, and Google has launched SynthID, a beta tool for watermarking and identifying AI-generated images. Furthermore, Google is expanding its election ad policies to require disclosure of digitally altered or generated content, an approach that is becoming standard, especially in countries with election ad verification processes. |
Combatting deceptive use of AI in elections | Africa; Asia; Latin America and the Caribbean; Europe; Middle East and North Africa; Northern America; Oceania; | Tech Accord to Combat Deceptive Use of AI in 2024 Elections | Information technology | In February 2024, 20 companies announced a Tech Accord to Combat Deceptive Use of AI in 2024 Elections. The Accord is a set of commitments to deploy technology countering harmful AI-generated content meant to deceive voters. Signatories pledge to work collaboratively on tools to detect and address online distribution of such AI content, drive educational campaigns, and provide transparency, among other concrete steps. It also includes a broad set of principles, including the importance of tracking the origin of deceptive election-related content and the need to raise public awareness about the problem. More info: https://www.aielectionsaccord.com/ |
Addressing the prevalence of misleading information online | Asia; Europe; Africa; Latin America and the Caribbean; Middle East and North Africa; Northern America; Oceania; | Coalition for Content Provenance and Authenticity (C2PA) | Information technology, news, and media | The Coalition for Content Provenance and Authenticity (C2PA) addresses the prevalence of misleading information online through the development of technical standards for certifying the source and history (or provenance) of media content. Its members aim to do so by collectively building an end-to-end open technical standard to provide publishers, creators, and consumers with opt-in, flexible ways to understand the authenticity and provenance of different types of media. The concept of media provenance is the idea of binding basic facts to a piece of media in a cryptographically secure way, so that any alteration of the content after it leaves its original source can be detected (a conceptual sketch of this binding step follows this table). |
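To make the provenance idea in the C2PA case study above concrete, the minimal sketch below illustrates the basic bind-and-sign step: hash the media content, attach provenance facts to that hash in a manifest, and sign the manifest so that later alterations to either the media or the facts become detectable. This is an illustrative sketch in Python, assuming the third-party `cryptography` package; it does not follow the actual C2PA manifest specification, and the field names (`content_sha256`, `assertions`, `claim_generator`) are invented for illustration.

```python
# Conceptual sketch only: binds provenance "assertions" to a hash of the media
# and signs the result. NOT the real C2PA manifest format; field names are
# illustrative assumptions. Requires the third-party "cryptography" package.
import hashlib
import json

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import (
    Ed25519PrivateKey,
    Ed25519PublicKey,
)


def build_manifest(media: bytes, assertions: dict) -> dict:
    # Bind the provenance facts to a cryptographic hash of the media content.
    return {"content_sha256": hashlib.sha256(media).hexdigest(),
            "assertions": assertions}


def sign_manifest(manifest: dict, key: Ed25519PrivateKey) -> bytes:
    # Sign a canonical JSON encoding of the manifest.
    return key.sign(json.dumps(manifest, sort_keys=True).encode())


def verify(media: bytes, manifest: dict, sig: bytes, pub: Ed25519PublicKey) -> bool:
    # Re-hash the media, then check both the content hash and the signature.
    if hashlib.sha256(media).hexdigest() != manifest["content_sha256"]:
        return False  # media was altered after the manifest was issued
    try:
        pub.verify(sig, json.dumps(manifest, sort_keys=True).encode())
        return True
    except InvalidSignature:
        return False  # the manifest itself was tampered with


if __name__ == "__main__":
    key = Ed25519PrivateKey.generate()
    media = b"...raw image or video bytes..."
    manifest = build_manifest(media, {"claim_generator": "example-camera-app",
                                      "created": "2024-01-01T00:00:00Z"})
    sig = sign_manifest(manifest, key)
    print(verify(media, manifest, sig, key.public_key()))            # True
    print(verify(media + b"edit", manifest, sig, key.public_key()))  # False
```

In the actual standard, provenance manifests are embedded in or linked from the media file itself and chained across successive edits, so that a full edit history can be validated rather than the single snapshot shown here.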
Case study | Region(s) of focus of the case study | Organisation | Business sector(s) involved in the case study | Description of the case study |
---|---|---|---|---|
AI in action: accelerating progress towards the SDGs | Global | Google | Activities of member organisations | Advances in AI are helping tackle a growing number of societal challenges, demonstrating technology’s increasing capability to address complex issues, including those outlined in the SDGs. Despite global efforts, 80 percent of SDG targets have deviated, stalled, or regressed, and only 15 percent are on track as of 2023, illustrating the urgency of accelerating efforts to meet the goals by 2030. Google’s internal and collaborative research, technical work, and social impact initiatives show AI’s potential to accelerate action on the SDGs and make substantive progress to help address humanity’s most pressing challenges. Examples include: (1) machine learning with satellite imagery and survey data to extract socioeconomic indicators and generate visualisations and predictions of poverty in areas lacking survey data; (2) machine learning with satellite imagery, localised food prices, and conflict data to predict and address severe acute childhood malnutrition and forecast food insecurity for early action; and (3) natural language processing on conversations between educators and non-English-speaking parents, coupled with machine learning analytics on parent and student profiles, to build a multilingual family engagement platform and deliver personalised resources that help parents digitally connect with teachers regardless of language. |
Innovation and AI Accessibility | Africa; Asia; Europe; Latin America and the Caribbean; Middle East and North Africa; Northern America; Oceania; | Microsoft | Information technology | Microsoft supports projects that use technology to empower people living with disabilities and prioritises partnerships to rapidly accelerate how people with disabilities are included in and represented by the systems, designs and features of technology, including AI. A data desert is the lack of data for a particular group, which can lead to scientific discoveries and innovations being developed without those groups being represented, limiting their utility. Partnerships to address those gaps are therefore a key priority to help accelerate the development of accessible AI solutions to benefit the 1 billion-plus people worldwide with disabilities. Microsoft partners with developers, universities, non-governmental organisations, startups and inventors to take an AI-first approach in creating solutions that enhance work and life for and with global disability communities. As projects are completed, outcomes and learnings are shared and published on Microsoft’s AI for Accessibility website. |
Microsoft’s AI Access Principles | Africa; Asia; Europe; Latin America and the Caribbean; Middle East and North Africa; Northern America; Oceania; | Microsoft | Information technology | In February 2024, Microsoft announced commitments to enabling AI innovation and fostering competition by making its cloud computing and AI infrastructure, platforms, tools, and services broadly available and accessible to software developers around the world. As part of these commitments, Microsoft is making AI models and development tools broadly available to software application developers around the world; making public APIs available to enable developers to access and use AI models hosted on Microsoft’s cloud services; and supporting a common public API to enable network operators to support software developers. |
AFL-CIO partnership on AI and the future of the workforce | Northern America | Microsoft | Information technology | In December 2023, the American Federation of Labor and Congress of Industrial Organizations (AFL-CIO) and Microsoft announced the formation of a new partnership to create an open dialogue to discuss how AI must anticipate the needs of workers and include their voices in its development and implementation. Its three goals are to (1) share in-depth information with labor leaders and workers on AI technology trends; (2) incorporate worker perspectives and expertise in the development of AI technology; and (3) help shape public policy that supports the technology skills and needs of frontline workers. |
AI-empowered medical processes to accelerate productivity and ensure patient safety | Asia | China Mobile (Chengdu) Industrial Research Institute | Information technology | The JIUTIAN Medical Large Language Model, developed by China Mobile (Chengdu) Industrial Research Institute, addresses potential risks of AI in healthcare through human capacity building, skills training, and AI research investment. Key initiatives include: – Human capacity building: Collaborating with several leading medical institutions to train healthcare professionals in AI-assisted medical practices, enhancing their ability to use AI tools effectively and safely. – Skills training: Developing comprehensive programmes for medical staff on using the model in pre-hospital emergency rescue, in-hospital consultation, and post-hospital follow-up scenarios, ensuring responsible and efficient AI use. – AI research and development: Investing significantly in model refinement through pre-training and fine-tuning with extensive medical data, employing advanced data governance methods for accuracy and reliability. – Ethical AI development: Implementing rigorous ethical guidelines and privacy protection measures, addressing patient data security concerns and promoting responsible AI use. – Practical validation: Partnering with medical institutions for real-world testing, ensuring the model’s effectiveness and safety in actual medical scenarios. This project demonstrates China Mobile’s commitment to responsible AI development in healthcare, focusing on addressing potential risks while promoting innovation and improving patient care through advanced language model technology. |