Before regulation, it’s “best practices”: G7 agree to draft an AI Code of Conduct for private sector

Since June, the G7 has been negotiating its response to the ongoing AI revolution, and this Code of Conduct is only a first, voluntary step.

The Group of Seven (G7) industrialized countries is set to establish a voluntary code of conduct for companies developing advanced artificial intelligence (AI) systems. The move, aimed at mitigating the risks and misuse of AI technology, responds to growing concerns about privacy and security. The G7, comprising Canada, France, Germany, Italy, Japan, Britain, and the United States, together with the European Union, initiated the process in May during a ministerial forum known as the “Hiroshima AI process.”

The 11-point code aims to ensure safe, secure, and trustworthy AI on a global scale. It offers voluntary guidance for organizations developing advanced AI systems, including foundation models and generative AI. Its primary goal is to maximize the benefits of AI while addressing the associated risks and challenges. The code encourages companies to identify, assess, and mitigate AI-related risks throughout the technology’s lifecycle, and to address patterns of misuse after their products reach the market. Companies are also encouraged to publish reports on their AI systems’ capabilities, limitations, and responsible and irresponsible uses, and to invest in strong security measures.

We are still far from a comprehensive framework for AI regulation. First, “best practices” should emerge. (Source: istockphotos)

The European Union has taken a proactive approach to regulating AI through its AI Act, while other countries, such as Japan and the United States, have adopted a more hands-off stance to stimulate economic growth. As a voluntary measure, the G7 code of conduct is seen as a step toward ensuring safety and bridging the gap until comprehensive regulations are in place.
