Paris AI Action Summit readout: China establishes AI Safety Institute, setting stage for greater global collaboration
Emergence of DeepSeek, other advanced Chinese models suggests collaboration needed around rules for frontier models, but outcomes in Paris suggest road ahead will be challenging
The author participated in several landmark meetings during the Paris AI Action Summit this week that saw the unveiling of the Chinese Artificial Intelligence Safety Institute, a collaborative effort of the Chinese AI safety network and ecosystem. This is a significant development in the process of establishing capacity among government-affiliated organizations to assess and test the risks around advanced AI models. The so-called Bletchley Park process, begun in November 2023, has as one of its components the promotion of collaboration among safety institutes to work with industry and assist regulators in determining how best to reduce the risks posed by advanced AI models. This effort is also related to the growing number of companies that are developing internal responsible AI model scaling policies.
The convergence of these efforts could be critical to establishing a global approach to AI safety, and China will play a major role here, given the number of innovative AI firms in the country, including DeepSeek and many others; a number of Chinese firms were present at the Paris Summit. From China, Vice Premier Zhang Guoqing attended the Summit, which was co-chaired by French President Emmanuel Macron and Indian Prime Minister Narendra Modi. German Chancellor Olaf Scholz and European Commission President Ursula von der Leyen were also in attendance. This article is an initial look at the meaning of the new Chinese AI Safety Institute and the prospects for it to play a future role in AI governance frameworks.
In the runup to the Paris AI Action Summit, a number of leading model developers and key infrastructure companies announced frontier AI model safety commitments (FAISC). For a good summary and links to these commitments, see here. As models become more capable, it is increasingly clear that companies need to operationalize these commitments and organize internally around them as a baseline for assessing when models could become capable enough to contribute meaningfully to the development of CBRN (chemical, biological, radiological, and nuclear) weapons capabilities, or to reach a level of autonomy sufficient to generate and deploy code. Companies including DeepMind, Meta, G42, Cohere, and Microsoft all released new FAISC documents before Paris. Anthropic and OpenAI had already released earlier commitments. The commitments are somewhat aligned with the so-called Seoul Commitments, which came out of the AI Seoul Summit in May 2024. Among Chinese AI firms, only Zhipu.ai signed on to the Seoul Commitments, but several more signed on just before the Paris Summit (see below).
In addition, during the Summit, at a side event meeting that included AI thought leaders such as Stuart Russell and leading AI safety researchers, the sizeable Chinese delegation officially unveiled the China AI Safety & Development Network (CNAISDN). While the English translation was Association, leaders indicated that they planned to use Network, given that this is a consortium of leading Chinese AI organizations. Leading Chinese AI research institutions, including Tsinghua University, Peking University, the Chinese Academy of Sciences, the China Academy of Information and Communications Technology, the Beijing Academy of Artificial Intelligence, and the Shanghai Artificial Intelligence Laboratory, have jointly established the CNAISDN, which will serve as China’s official Artificial Intelligence Safety Institute (AISI), representing China in dialogues and collaboration with AI safety research institutions around the world.
Introduction to China AI Safety and Development Association
The CNAISDN released this statement describing its mission:
Safe AI governance concerns the interest of all humanity and requires global attention and participation. In 2023, China launched Global AI Governance Initiative, calling for “uphold the principles of wide participation and consensus-based decision-making, adopt a gradual approach, pay close attention to technological advancements, conduct risk assessments and policy communication, and share best practices. On this basis, we should encourage active involvement from multiple stakeholders to achieve broad consensus in the field of international AI governance, based on exchange and cooperation and with full respect for differences in policies and practices among countries.”
To implement this initiative, with the support of the Chinese government, the major Chinese AI research institutions* have gathered the country’s technological capabilities and intellectual capital in AI development and safety research and established the China AI Safety and Development Association. This organization is China's equivalent to the Artificial Intelligence Safety Institute (AISI), representing the Chinese side in dialogues and collaboration with AISI around the world.
The 2024 France-China Joint Statement on Artificial Intelligence and Global Governance particularly emphasized the significance of strengthening international cooperation in the context of rapid technological advancement to ensure international security and stability while respecting sovereignty and fundamental rights. Guided by this important consensus, we are hosting a side event during Paris Action Summit, focusing on enhancing understanding of China’s AI development and safety governance approaches while seeking insights into other countries' developments in this field, thereby contributing to global cooperation in AI development and governance.
*Note: Members include Tsinghua University, Peking University, Chinese Academy of Sciences, China Academy of Information and Communications Technology, China Electronics and Information Industry Development Research Institute, Beijing Academy of Artificial Intelligence, Shanghai Artificial Intelligence Laboratory, Shanghai Qi Zhi Institute.
The CNAISDN is complex, with member organizations based around China and working with city governments, industry, and national government ministries. The CNAISDN is also intertwined with China’s emerging AI interagency process, which includes a set of ministries and commissions that are stakeholders in China’s AI governance system. These include the Ministry of Industry and Information Technology (MIIT), the Cyberspace Administration of China (CAC), the Ministry of Science and Technology (MOST), the National Development and Reform Commission (NDRC), and the Ministry of Foreign Affairs (MoFA). City governments in Beijing and Shanghai are also major players here. While the emergence of this network as a functioning AI safety institute has involved and will involve complexity, potential rivalries, and extensive coordination with stakeholder ministries, the technical talent of the individual organizations appears very strong, and the broader ecosystem includes a number of capable technology companies involved in areas such as model testing for specific types of risks.
With China now putting forward the Network as its AI Safety Institute and signaling eagerness to participate in the international AI SI network, the geopolitical challenges will be significant, given what will likely be firm opposition from the US but strong support from the UK, France, and other key players. The next several months will be critical in determining the way forward. Chinese officials the author spoke to at the Paris Summit, along with the broader AI safety community, are certain that the CNAISDN and related organizations, some of them already participating in well-established Track 2 dialogues, will continue to seek new ways of collaborating with willing partners on AI safety issues, regardless of geopolitical pressures. The potential for the nascent AI SI process to splinter is also real, but for now there will be an attempt to probe the political waters to see how the drivers of the process want to treat the new Chinese AI SI, given the participation of more Chinese companies in the voluntary commitments, potentially including DeepSeek later in the year.
One of the challenges is that the focus of China’s AI safety ecosystem has been more on risks around issues such as data privacy, fraud, and content, and less on the national security concerns, such as cybersecurity, CBRN, and autonomy, at the core of the work of the UK, US, and other safety institutes. Nevertheless, the Chinese AI safety ecosystem has developed substantial technical experience testing models against a suite of specific risks, and this experience would be relevant in tackling the higher-level national security risks going forward. The new Chinese AI Safety Commitments, released in December, represent a major development, and one focus of the new CNAISDN could be to try to harmonize these commitments with the Seoul Commitments. This will be challenging given the different areas of focus of the two sets of commitments. Most of the leading Chinese AI model developers, including DeepSeek, Baidu, Tencent, Huawei, Alibaba, and Zhipu, have signed on to the Chinese commitments. Scott Singer of Carnegie has an excellent assessment of the differences between the two sets of commitments, a comparative list of the companies that have signed on to each, and the importance of the commitments to the overall process of establishing a framework here.
Broader AI Action Summit outcomes and US-China competition on AI
France from the very beginning indicated that the AI safety issue would be downplayed, with more attention given to AI innovation, development, and deployment; hence the name AI Action Summit. The Summit organizers laid out a very ambitious program, with five major themes. President Macron added Indian PM Narendra Modi as co-chair of the event late in the game.
A big part of the Summit was about investment in France’s AI infrastructure, something of a coming-out party for France and the EU on AI innovation. There were announcements of major investments: €109 billion in France, including a major investment from the UAE totaling $5 billion over six years. In addition, European Commission President Ursula von der Leyen pledged investments totaling €200 billion, including a plan to build major AI datacenters.
Downplaying the safety issue did not go well at the Summit. A large contingent of academic, NGO, and non-profit organizations is heavily committed to the safety issue, and these groups conducted a series of very well organized events and deep discussions around the official Summit events. French organizers cancelled a subset of AI safety-focused panels at the last minute that had originally been on the schedule for official events at the Grand Palais. The State of the Science report led by Yoshua Bengio was issued before the Summit, but a panel on the report was downgraded to a side event. Bengio and other proponents of the need to focus on existential risks, such as Stuart Russell, were active in a number of the side events, many of which addressed the safety issue. There were some clear tensions in the air between deep learning pioneer Yann LeCun, who is very close to Macron and to leading French open source/weight model developer Mistral, and fellow pioneers including Bengio. LeCun’s position is that AI development is not yet close to AGI and that losing control is unlikely; he contends humans will remain in control of the engineering of advanced AI.
The emphasis on safety is a complex issue. The leaders of the major labs, including and especially Sam Altman of OpenAI and Dario Amodei of Anthropic, have suggested, including during interviews at Davos, that AGI is not far off; timeframes range from one to three years, depending on the company or individual. Altman has stated, “We know how to get to AGI.” After the Summit, Amodei lamented the de-emphasizing of AI safety as a lost opportunity in an important essay. So the labs themselves appear to be reaching the point where they believe some of the risks addressed by responsible scaling are getting close. Responsible scaling is about establishing internal processes to do additional testing if models pass a threshold where they are able to contribute to CBRN weapons/capabilities development in new ways, or even to determine that a specific model should not be released. Autonomy has recently been added to the mix of risks, which includes the development of virtual software engineers as capable as leading human coders, allowing companies to throw millions of programmers at a problem; as I have written, this is one of the justifications US officials have used for export controls intended to slow the ability of Chinese firms to develop advanced models. The issue of agentic AI is also becoming more salient, as the major labs develop agents, or “virtual collaborators” as Anthropic refers to them, that will act on our behalf within enterprises and as consumers. For more on this, see AI Decrypted 2025. Some experts on AI safety thought it possible that a broader consensus could be reached around agentic AI, given the potential for these platforms to have real-world impacts.
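To make the threshold logic behind responsible scaling concrete, here is a minimal, purely illustrative Python sketch of how a lab might gate a release decision on capability evaluations. The risk categories, threshold values, and function names are hypothetical and do not reflect any specific company's published policy.

```python
from dataclasses import dataclass

# Hypothetical risk categories echoing the frontier safety commitments
# discussed above: CBRN uplift, offensive cyber, and autonomous coding.
# Threshold values are illustrative only.
THRESHOLDS = {
    "cbrn_uplift": 0.20,
    "cyber_offense": 0.35,
    "autonomous_coding": 0.50,
}

@dataclass
class EvalResult:
    category: str
    score: float  # normalized 0-1 score from a capability evaluation suite

def release_decision(results: list[EvalResult]) -> str:
    """Return the action a (hypothetical) responsible scaling policy would take."""
    exceeded = [r for r in results if r.score >= THRESHOLDS[r.category]]
    if not exceeded:
        return "release: no capability threshold crossed"
    # Crossing any threshold triggers deeper testing and mitigations,
    # and may ultimately block release, mirroring the escalation logic
    # described in the commitments.
    flagged = ", ".join(r.category for r in exceeded)
    return f"hold for additional testing and mitigations: {flagged}"

if __name__ == "__main__":
    evals = [
        EvalResult("cbrn_uplift", 0.12),
        EvalResult("cyber_offense", 0.41),
        EvalResult("autonomous_coding", 0.33),
    ]
    print(release_decision(evals))
```

In practice the evaluation suites, thresholds, and escalation steps differ considerably across companies; the sketch is only meant to show the basic gate-then-escalate structure the commitments describe.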
Both the US and UK governments chose not to sign the final declaration. The UK was concerned that the Summit agenda and final declaration did not sufficiently address safety issues, and the US concurred here. The UK is mostly focused on progress on the AI safety issue, the Safety Institute process, and getting more companies to sign on to frontier AI scaling commitments. UK officials and AI safety advocates believe there is an increasing need to move towards some agreement on testing of models, and to do actual testing, as model capability rapidly improves. The new Trump administration in the US is just starting to develop its AI policy positions. During his speech at the Summit, US Vice President JD Vance pushed a number of themes the new administration will be advocating: less regulation, the US desire to dominate the industry, a willingness to weaponize US company dominance in advanced semiconductors, and implementation of policies such as the AI Diffusion Rule. The US AI policy ecosystem is starting to take shape, and it is clear it will take different approaches to AI safety than the Biden team did. On China and AI, however, there will likely be even more pressure. The UK and US differ widely on issues like the participation of China in the AI Safety Institute network.
“Facing the opportunities and challenges of the development of artificial intelligence, the international community should join hands to advocate intelligence for good, deepen innovation cooperation, strengthen inclusiveness and universal benefit, and improve global governance. China is willing to work with other countries in the field of artificial intelligence to promote development, protect security, share results, and jointly build a community with a shared future for mankind.”—Chinese Vice Premier Zhang Guoqing at the Paris AI Action Summit
The role of China in the AI Summit and the AI Safety Institute network process is one of the most complex issues to come out of the Paris Summit. During a meeting last year as part of a dialogue on AI governance, President Macron invited Chinese participation, and Xi complied by sending a high-level delegation headed by Vice Premier Zhang Guoqing. As noted, the Summit also saw the official launch of the Chinese AI Safety and Development Network, which will serve as China’s AI Safety Institute. It is now clear that, after much bureaucratic wrangling among the key Chinese ministries involved in AI regulation and promotion and among key players in China’s expansive AI safety ecosystem, Beijing would like to participate in the AI Safety Institute process. This was made clear in comments at the Summit by senior Chinese officials. However, the US government, starting with the Biden administration’s Commerce Department, has so far resisted allowing Chinese participation in the AI SI network process. The CNAISDN is a very capable group of academic and research organizations. China also has a number of companies that develop solutions to real-world risks around AI deployments, such as facial recognition fraud detection. As noted, the China AI safety community is more focused on these types of risks, but it is now putting more emphasis on national security risks such as CBRN, which have been the central focus of US and UK AI SI efforts. So while the priorities of the CNAISDN and the AI SI network are not completely aligned, it is clear there is room for collaboration to advance the AI SI process.
Leading Chinese AI firms, in consultation with the government and the AI safety community, are becoming more willing to sign on to voluntary commitments, though each company has a different level of in-house capacity to begin implementing elements of the commitments. Just before the Summit, two more leading Chinese AI model developers signed on to the Seoul Commitments: Kai-Fu Lee’s 01.AI and MiniMax, along with Nvidia and Magic. DeepSeek was also invited, but the small startup is not ready to commit to this type of agreement and plans to invest more in AI safety capabilities before considering joining the commitments. As I have noted, the firm has limited personnel, and its business model remains unclear, despite all the attention over the past month. Apple, for example, considered using DeepSeek for model support for Apple Intelligence, but determined that the company lacked sufficient manpower to support such a complex effort. For more on DeepSeek, see my previous posts in this Substack.
DeepSeek and open source/weight models will remain a salient topic within the context of industry developments and US-China competition in this domain. Significantly, this week former Google CEO Eric Schmidt, an influential figure within US AI policy circles, suggested that he supported US-China cooperation on AI safety, specifically focused on open source/weight models. Schmidt’s message at the AI Action Summit was complex, as he argued that the US needed to focus more on open source models to avoid seeing Chinese firms dominate their development. At the same time, Schmidt suggested that US-China competition should not preclude US-China cooperation on AI safety.
“…the West [should] collaborate with the Chinese on AI safety, as the countries would face the same issues around the powerful technology. How could it possibly be bad for us to give them information that they could use to make their models more safe?”—Eric Schmidt
After the apparent snub of Vice Premier Zhang by JD Vance, who walked out of a dinner when Zhang endorsed a role for the UN in AI governance, and given the tone of Vance’s statements, it will be important to watch how China reacts going forward. Beijing and President Xi have been very critical of US export controls and the AI Diffusion Rule, for example, and will likely want to include these issues in what could be a lengthy period of in-depth negotiations over a potentially broad trade and economic deal.
Looking forward: next six months critical to getting traction on truly global AI safety effort
The next summit will be in India, and apparently Modi has agreed to put Safety back in the title for this meeting, likely in response to the intense criticism of the French approach. But the version in India will also have a huge focus on development of the local model ecosystem, infrastructure buildouts, and the deployment of AI across critical sectors. India is likely to welcome Chinese participation, barring any serious resumption of border hostilities or other unexpected events.
The status of the AI SI network remains unclear, given the apparent lack of support from the Trump administration for the process. There is general recognition of the importance of safety in the national security context, but the Trump AI team wants to play down anything that sounds like it could lead to regulation. Here one of the arguments is that regulation could put leading US players at a disadvantage to Chinese companies, as China has adopted a very light regulatory touch so far. At the same time, the UK is eager to do more on the AI SI front, and to include China in the process.
On the industry side, increasingly capable models are likely to continue to drive growing concern about safety. The other major issue is the debate about open source/weight models versus closed proprietary models, galvanized by the release of DeepSeek. Already there is growing debate in Washington about what to do about DeepSeek. Regulatory actions from the Trump administration are likely coming, centered on a possible ban on DeepSeek, controls on Nvidia GPUs such as the H20, and major attention to how to implement the AI Diffusion Rule.
In subsequent articles, among other topics, I will examine in more depth the growing support for open source/weight models in China and the role of key players such as DeepSeek and Alibaba in this arena.
Sampling of responsible scaling and other AI safety announcements ahead of Paris AI Summit
Amazon
Anthropic
Cohere
G42
IBM
Meta
Our Approach to Frontier AI | Meta Feb 2025
Microsoft
Microsoft Responsible AI Standard v2 General Requirements 2022 (latest edition)
Responsible AI Transparency Report May 2024
Mistral AI
Naver
AI Research | CLOVA – NAVER safety site
OpenAI
OpenAI’s Economic Blueprint | OpenAI Jan 2025
Introducing ChatGPT Gov | OpenAI Jan 2025 – ChatGPT for Gov agencies
Samsung Electronics
Chinese AI Firms
守护AI安全,共建行业自律典范——首批17家企业签署《人工智能安全承诺》 (Safeguarding AI Safety and Building a Model of Industry Self-Regulation: The First 17 Companies Sign the "Artificial Intelligence Safety Commitments")
A domestic Chinese AI safety initiative featuring 17 prominent Chinese firms. The document bears similarities to the Seoul Commitments.
In May 2024, China and France released a joint declaration on AI and global governance during Chinese President Xi Jinping's state visit to France, with the two heads of state agreeing to take measures to work closer on addressing AI risks, strengthening cooperation and global governance of AI to promote "secure, reliable, and trustworthy AI."