Laying on the DeepSeek FUD
Parsing recent comments from an anonymous US government official and the continuing effort to get China's AI sector wrong
FUD: fear, uncertainty, and doubt, usually evoked intentionally in order to put a competitor at a disadvantage
This week yet another strange commentary was released on DeepSeek via a media story, laden with all kinds of accusations—mostly old and discredited ones. As I have already debunked a lot of these, here I will focus on why these types of stories are being planted in the media and what is causing the obsession in some quarters with DeepSeek and with proving the firm is something other than what it actually is.
DeepSeek FUD remains in fashion, despite debunked claims
The latest comments from an “anonymous official at the State Department” come in a long line of stories about DeepSeek, starting after January 20, that have alleged a host of things, from the mundane to the absurd. Without exception, all of these allegations, anonymous comments, ‘reports,’ analyses, and commentary are based only on so-called ‘open source’ information, meaning crawling the web to find tidbits about DeepSeek, links associated with researchers, and the like, and then providing ‘analysis’ of what these things may or may not mean about DeepSeek. Also without exception, the bulk of this analysis is tendentious, misleading, or flat-out wrong, and designed to stir fear in the hearts of everyone from members of the US Congress to venture capitalists to intelligence and military budgeters. The sheer scope of FUD and unsubstantiated innuendo aimed at one small AI startup in China—which developed frontier models without any government backing, is run by a hedge fund CEO, has not benefited from government industrial policy, does not possess 50,000 Hoppers as some allege, and has no obvious ties to China’s military or intelligence services—is mind-boggling. Clearly, DeepSeek is doing something right.
Much has been written about the origins of the company, the conversion of CEO Liang Wenfeng into a believer in AI once hedge funds became difficult to operate, and the recruitment of staff with no academic experience outside of China. Then there is Liang’s refusal to take money from investors, let alone the state, in the months leading up to the release of the V3 and R1 models. These facts are all well documented. The campaign which began in January 2025 to link the firm to the Chinese government and supposed breaches of US export controls is fascinating, and sounds like a classic intelligence disinformation campaign, replete with supposedly authoritative ‘industry experts’ and well-connected think tankers whispering into the ears of US officials and Hill staffers, dropping well-timed aspersions up and down the halls of government buildings in DC.
Let’s start with the Reuters story, citing an “anonymous official,” which raised the following issues with respect to DeepSeek.
DeepSeek is supporting “military and intelligence operations.” Allegedly “this effort goes above and beyond open-source access to DeepSeek’s AI models.”
DeepSeek is sharing user information (and statistics!) with “Beijing’s surveillance apparatus.”
DeepSeek has access to H100 Nvidia GPUs, obtained in violation of US export controls. Here the story has changed from the 50,000 “Hoppers” alleged by industry sources and Scale AI founder Alexandr Wang into some lesser but still “large volumes” of H100s. Other sources told Reuters that the number of H100s DeepSeek may possess was “far smaller” than the 50,000 “Hoppers” that Wang claimed DeepSeek had in a January media interview.
“DeepSeek sought [!] to use shell companies in Southeast Asia to evade export controls, and DeepSeek is seeking to access data centers in Southeast Asia to remotely access US chips.” The official declined to say if DeepSeek had successfully evaded export controls.
“Our review indicates that DeepSeek used lawfully acquired H800 products, not H100” —Nvidia spokesperson
Let’s parse these allegations by the anonymous speaker, and the terms used here such as “may possess,” “sought,” “we understand,” etc.
“We understand DeepSeek is supporting ‘military and intelligence operations.’” This suggests that the conclusion is based on ‘open source’ analysis. Yes, clearly after the release and hoopla around R1 in January, many Chinese state-owned enterprises, military units, and private enterprises began using DeepSeek for various applications. To extrapolate from this that DeepSeek is “supporting military and intelligence operations” is exactly the kind of ‘open source’ ‘analysis’ that has been done for years on Chinese companies, and has led to the addition of Chinese firms such as Xiaomi, CATL, and Tencent to the Pentagon’s 1260H list, which tars companies as “military associated.” Xiaomi successfully fought the designation, which hinged apparently on zero actual evidence of any ties.1 The judge in this case called the process “deeply flawed.” In the case of DeepSeek, and the recent efforts to link the firm to military or intelligence organizations, the process appears just as or even more profoundly flawed.
In general, this is an intriguing and vague accusation. First, DeepSeek is a very small company, and it seems highly unlikely that DeepSeek is providing “support to China’s military and intelligence operations.” It is possible that DeepSeek is collaborating with a number of companies, some with ties to or cooperating with the military—but it is not even clear what the official is referring to. DeepSeek is likely collaborating with Huawei, for example; some administration officials may be making a link between the private sector company and China’s military. But as a long-time observer of Chinese industry noted to me recently, “…actual links with the Chinese military are usually very difficult to realize for private sector companies. But then there is the catch all civil military fusion. When all this gets into the media, it turns into ‘aiding China’s military and intelligence operations.’” Indeed—and it is also symptomatic of general analysis that relies only on open source intelligence, without deeper understanding of the Chinese systems or the individual companies involved. Like the Exiger report—see my comments here.
Of course, the publicly visible collaboration between the leading US AI labs and the Pentagon and defense contractors is now so deep that US “civilian military fusion” in the AI domain appears to far exceed that of China. The unexamined nature and dangers of this trend are laid out nicely in a piece by Eurasia Group’s Institute for Global Affairs analyst Jonathan Guyer here. Also important to note is the increasingly close relationship between OpenAI, Anthropic, and xAI and the US AI national security-focused safety community, which almost certainly includes people with close ties to the US intelligence community. No similar trend in China has been clearly documented. The open source nature of leading Chinese AI models from DeepSeek and Alibaba, and their use by other organizations, is very different from leading US labs with proprietary models working directly with the Pentagon and defense contractors. Given this reality, anonymous comments suggesting “we understand” anything about DeepSeek and its ties to China’s “military and intelligence operations” strike seasoned observers of China’s AI sector (those who have actually spoken frequently with leading AI companies in China, and others with direct knowledge of critical relationships) as quite Orwellian.
Allegedly, “this effort goes above and beyond open-source access to DeepSeek’s AI models.” First, by definition, open source/weight models mean anyone has access to the models. Second, DeepSeek is a small company, with researchers and developers focused on model development. Based on numerous conversations with industry participants in China, it seems very unlikely that DeepSeek is sending model developers to help with “military and intelligence operations.” This looks like clear disinformation. There are already numerous companies in China helping enterprises deploy DeepSeek models; this is part of the open source/model weight business model. After five years of focusing on deep AI algorithm research and taking no money from private or government sources, the notion that CEO Liang would suddenly opt to send DeepSeek employees to “support military and intelligence operations,” and that Chinese organizations in these domains would need this or that officials in Beijing would think it a good idea, seems implausible. Again, the conclusion is that whoever came up with this formulation was extrapolating from open source intelligence, without a deeper understanding of how either DeepSeek or China operates. Anything is possible; some things are less likely.
The data sharing issue is a red herring and almost certainly just a scary throwaway. Most of the instances of DeepSeek running on Chinese servers are hosted by third party cloud services organizations, so no data is going back to DeepSeek. There are many other ways for the government to collect personal data; open source/weight AI models are not likely on the priority list. And no, for the millionth time, China’s national security and intelligence laws do not mandate that companies hand over data to “Beijing’s surveillance apparatus.” Offering up this long-repeated claim again does not make it true, and there is ample evidence that it is not, such as the DiDi case. People who should know better repeat it like a mantra, without knowing whether it is true or what it would mean if it were even partially true, in terms of the sheer volume of data flows. Like “civilian military fusion,” it is another misunderstood throwaway line with no explanatory power. At least with the EU, where last week the German data commissioner asked Apple and Google to remove the DeepSeek app from their stores, the issue revolves around handling of personal data in conformance with EU data laws. Google is examining the issue, and DeepSeek has a fairly extensive data privacy policy that includes special provisions for the EU (EEA) and is quite clear about the location of servers in China, etc.
The H100/Hopper thing, again. This issue just won’t go away, despite my and others’ best efforts to explain its implausibility. Now, at least, we are down to “far less” than the 50,000, so there has been some progress. But the statements of Wang and others early this year claiming 50,000 Hoppers have been proven demonstrably false, via statements based on real knowledge of supply chains and customers, and more importantly from Nvidia itself (not to mention my own publicly shared conclusions based on extensive discussions in China with industry players). The H100 claims are all unsubstantiated—from 50,000 to 10. All the major companies in China developing AI models are in fact eager to avoid being placed on the Commerce Department’s Entity List and are hence strongly incentivized not to use smuggled or diverted GPUs, as I have previously noted. DeepSeek does have access to the 10,000 A100s purchased legally before the October 2022 export controls, and some number, certainly in the thousands, of H800s also purchased legally between October 2022 and October 2023. Liang has spoken frequently about the major constraint on DeepSeek being access to more advanced GPUs. That the anonymous official could not confirm anything other than that DeepSeek “sought” GPUs suggests that the information behind this assertion is pretty weak. The motivations behind these types of statements need to be examined, ranging from attempting to show that Chinese firms can only develop advanced models by skirting export controls, to asserting that DeepSeek must have had access to more advanced GPUs in order to minimize the firm’s innovations and cost structures around training and inferencing.
DeepSeek is seeking to access data centers in Southeast Asia to remotely access US chips. There is currently no legal provision in US export control regulations that would prevent DeepSeek, or any other Chinese company developing advanced AI models, from accessing data centers outside China. None. So this is like saying that DeepSeek is exploring the market to see where it can access hardware services outside of China that it needs for its legitimate commercial interests because it is unable to obtain hardware for use in China. I am aware of other Chinese AI developers using AI data centers outside of China to access certain services, and those firms have determined that this does not violate any current regulations. Lumping this statement in with unproven insinuations that DeepSeek “sought” to obtain export controlled hardware is a typical tactic used in intelligence reporting based on fragmentary open source intelligence.
Mischaracterizing DeepSeek’s origins, funding, support, and intentions
Then there is an interesting paper from RAND on Chinese industrial policy around AI. While acknowledging that,
“China’s private-sector companies, such as DeepSeek, have led the development of AI rather than state firms, suggesting that the private sector may have the advantage in driving innovation in this sector,”
the study then includes DeepSeek in a chart called “China’s AI Tech Stack and Industrial Policy” and appears to suggest that DeepSeek has benefited from various national investment funds, guidance funds, AI pilot zones, local government investment funds, talent incentives, startup programs, subsidized compute vouchers, and state-backed AI labs, along with the broad catchall, “promotion of open-source models and frameworks,” as part of China’s industrial policy around AI. Let’s be clear: DeepSeek has benefited from none of these things, least of all government “promotion of open-source models and frameworks.” Even after January 20, when the Chinese government signaled that government departments and state-owned enterprises should use AI models, including DeepSeek, the commercial benefits to DeepSeek remain minimal, and the firm’s models, as the paper admits, were developed without government backing. Nor was the firm’s creation a product of industrial policy.
If we try hard enough, we can link anything to state backing and industrial policies, such as the state-backed education system that produced DeepSeek’s engineers, but trying to paint DeepSeek as a product of—or even weakly associated with—Chinese industrial policy is quite a stretch. This, in fact, is part of the reason for the other attempts to link DeepSeek to the Chinese government: despite all the government support, the 2017 National AI Development Strategy, all the investment vehicles cited in the RAND table, etc., no one could believe how good the firm’s models were in January. As I have noted, beyond Silicon Valley, other Chinese AI firms, Wall Street, and the broader AI community, no one was more surprised at the success of DeepSeek than Chinese leaders in Zhongnanhai. DeepSeek had come to the attention of Xi Jinping before January 20, but no one in the Chinese leadership anticipated the global acceptance of DeepSeek’s open source models, and of the detailed research papers supporting them, by the open source and broader AI communities—let alone the reaction from the stock market. This despite my favorite of the many conspiracy theories around DeepSeek: that hedge fund guru Liang had engineered the whole thing and was short selling major US tech stocks, and that DeepSeek was a “Chinese psyop.”
Finally, we continue to see a ton of interest in DeepSeek and China’s AI sector development from the major US AI labs, who are almost certainly sharing their findings with the US government and intelligence community. As I documented previously, Anthropic’s characterization of DeepSeek’s technology capabilities as “clever” but “overblown,” and the admission that Anthropic had tested DeepSeek’s models and determined that they were “not a national security threat,” was a noteworthy development.
Now, we can also throw in more open source analysis on China’s AI sector from OpenAI, which alleges that there is something called “CCP headway in getting other governments around the world to adopt its AI.” Again, all the major companies developing AI in China are private sector, and the RAND report quoted above stresses this. So there is no such thing as “CCP AI”—particularly when most, but not all, Chinese AI model developers have moved to open source/weight approaches. When I am running a DeepSeek model on my RTX 4090 GPU at home, is this “CCP AI”? By this logic it is, as absurd as it sounds.
The latest OpenAI paper alleges that Zhipu AI, the only Chinese company to participate in the May 2024 Seoul AI Safety Summit and sign on to a declaration calling for companies to develop responsible scaling policies, is part of an effort by China “to lock Chinese systems and standards into emerging markets before US or European rivals can, while showcasing a ‘responsible, transparent and audit-ready’ Chinese AI alternative.” In Seoul, Zhipu signed on to the Frontier AI Safety Commitments alongside 15 other leading global AI developers, including OpenAI and Anthropic. It remains unclear which standards are at play here or would “lock in” Chinese systems. We are likely headed for a fully bifurcated AI stack, and many countries, including in the Global South, may choose one side or the other. Companies in those countries will be free to choose which stack they deploy, however. There will not be any “lock in” based on unspecified “standards.” All the activity by Zhipu cited in the paper sounds like an AI company trying to compete in a very competitive domestic and international market, rather than a sinister effort to get governments to adopt “CCP AI.”
The problem with these types of assessments is that they are trying to fit a complicated reality in China into the ‘authoritarian vs democratic AI’ narrative, epitomized by Anthropic CEO Dario Amodei’s claim in Machines of Loving Grace that,
The [democratic AI] coalition would aim to gain the support of more and more of the world, isolating our worst adversaries [mainly China] and eventually putting them in a position where they are better off taking the same bargain as the rest of the world: give up competing with democracies in order to receive all the benefits and not fight a superior foe.
It is important to understand that, for proponents of this worldview, any AI company in China must eventually be associated with or co-opted by the Chinese state, just as leading US AI labs are being co-opted by the US state. In the battle of authoritarian AI vs. democratic AI, companies like DeepSeek cannot be neutral.
Expect more of the same and worse as the race to AGI heats up
As we increasingly enter a zero-sum race to AGI/ASI, expect more anonymous comments on DeepSeek, and perhaps on Alibaba, Tencent, Baidu, ByteDance, and MiniMax. OpenAI CEO Sam Altman has told the US government that he expects the firm to have something akin to AGI during the Trump administration. OpenAI’s board can determine when AGI has been achieved, and OpenAI would clearly inform the US government well before this was made public. The “Five Levels” internal research paper from OpenAI lays out a roadmap for getting to AGI.2 Whether Level 5 of OpenAI’s scale would trigger a Decisive Strategic Advantage (DSA) for the US, and how the Trump administration would use this advantage against China, is becoming the core of the US-China AI competition—though, as I have noted, the assumptions underlying the whole DSA argument remain largely unexamined.
Interestingly, there have been few attempts to link these leading, much larger AI developers (with significantly more links to the Chinese government) to support for “military and intelligence operations.” Mostly it has just been tiny DeepSeek. Expect this to change in the coming months. The addition of Tencent to the Defense Department’s 1260H list at the end of the Biden administration was likely just a hint of things to come. (Tencent could fight this, and would likely prevail much as Xiaomi did, as any process here would be “deeply flawed.”) And then there is the entire area of embodied intelligence, humanoid robots, etc., which is seldom if ever addressed in discussions around the race to AGI. Before we get to AGI, for example, what is the US government going to do to tackle the issue of thousands of Chinese humanoid robots roaming US homes and cities, powered by open source Chinese AI models running on US cloud services platforms?
The dynamic outlined here carries significant geopolitical risks, which I have documented here with Alvin Graylin, and in a later essay in the Cairo Review. The centrality of the US export controls to this dynamic, and the debate about the effectiveness of the controls, will continue. So too will the mounting collateral damage, with companies impacted by shortages of rare earths and magnets now counted among the casualties. Expect more FUD on DeepSeek once the firm’s V4 and R2 models are released. Liang is reportedly not happy yet with the performance of R2, and could really use some of those mythical 50,000 “Hoppers” to improve training…
Finally, as we get closer to AGI, research papers such as the internal OpenAI documents on ‘levels’ will become much more important. These levels are meant to provide a spectrum of capability rather than a binary AGI definition and appear to have been primarily shared internally and with investors. There is no mention in the paper of CBRN or cyber as far as I can tell, as (rightly) these projections are focused on commercial use cases for advanced AI models. At the same time, all the major labs have implemented responsible scaling policies which seek to put up guardrails around CBRN, cyber, and autonomy. So how will the major US AI labs know when their own models could provide a DSA on cyber operations—one scenario raised by former Biden White House AI lead Ben Buchanan in an interview with Ezra Klein? And how will the Chinese government know when DeepSeek or another leading Chinese AI company has developed a model capable of taking down US critical infrastructure? And who in Beijing will assess the progress of proprietary models from the leading US AI labs, and how exactly?
Also, significantly, the Biden era US NSM on AI, which has not been rescinded yet by the Trump administration, also notes that the US government pledges not to develop some AI capabilities that are national security-related, though these are very limited and do not mention areas such as cyber. So how will we—meaning the AI safety community, OpenAI, Anthropic, DeepSeek, the US government, the Chinese government, UNESCO, etc.—know when capabilities have reached a point where unleashing them (without guardrails?) would constitute a DSA for a particular government? This is far from clear. There is a classified addendum to the AI NSM which “…addresses additional sensitive national security issues, including countering adversary use of AI that poses risks to United States national security.” But how, exactly, will the US government—including the intelligence community—or organizations closely tied to the private sector, such as RAND, which are examining Chinese models (possibly in collaboration with leading US AI labs like Anthropic—see Jack Clark’s assertion that they tested DeepSeek’s models and found them to not constitute a national security threat), determine when “adversary use of AI” truly poses a risk to US national security? This is even less clear, and will become exceedingly murky as both sides move advanced AI development ever deeper behind closed doors and deep within secure AI datacenters—with all the attendant risks.
The judge's core argument was that the Pentagon had not provided any specific evidence or findings linking Xiaomi directly to China's military or defense-industrial activities. In the 2021 decision, Judge Rudolph Contreras emphasized that the Department of Defense failed to develop a legally sufficient basis for listing Xiaomi, describing the process as “deeply flawed.”
Specifically, the judge pointed out that the DOD:
Did not demonstrate that Xiaomi "contributes to the Chinese defense industrial base"—a statutory requirement under Section 1260H.
Relied on insufficient evidence, lacking identification of any contracts, deliveries, or services between Xiaomi and the People's Liberation Army.
Based its listing on broad associations or speculative “military-civil fusion” links, which were deemed inadequate under the Administrative Procedure Act.
In short, without concrete documentation that Xiaomi conducted business with or materially supported the PLA, the judge ruled the Pentagon had not met its legal burden—and thus the listing was overturned.
OpenAI's internal framework outlines five distinct levels to chart AI progress toward AGI. While the full internal paper hasn't been officially released, multiple sources summarize the levels as follows:
Level 1 – Conversational AI: AI systems proficient in natural language communication (e.g., chatbots like GPT‑4o’s language mode)
Level 2 – Reasoners: Models capable of human-level problem-solving and consistent reasoning
Level 3 – Agents: AI systems that can take actions in response to user requests—acting semi-autonomously
Level 4 – Innovators: Systems that can aid in invention, generating new ideas and novel solutions beyond straightforward tasks
Level 5 – Organizational AI (sometimes called “superhuman” or “expert agents”): Capable of performing the tasks of an entire organization—coordinating, planning, and executing complex multi-step processes across domains
In short, the ladder ascends from fluent conversation (Level 1) → consistent reasoning (2) → independent action (3) → creative innovation (4) → full-scale orchestration at organizational level (5).