DeepSeek is China’s leading artificial intelligence (AI) developer and a serious competitor to the chatbots produced by America’s most prominent Silicon Valley giants. With technological capabilities that rival the platforms of OpenAI, Google, and Anthropic at a significantly lower cost, DeepSeek is poised to disrupt global AI markets. DeepSeek-V3, released December 26, 2024, is a cost-efficient alternative to chatbots like ChatGPT, while DeepSeek-R1, a “reasoning” model released January 20, 2025, is pitched as a direct competitor to OpenAI’s o1. DeepSeek’s developers have revealed that the platform was trained on semiconductor chips significantly less powerful than those Western firms use to train AI. This suggests that the United States’ export controls on semiconductors have not effectively curbed China’s development of advanced technologies.
However, these advancements come with a cost. Chinese AI platforms such as DeepSeek are mandated by law to build the Chinese Communist Party (CCP)’s ideological censorship into their models. Alignment training is embedded with “core socialist values,” and keyword filters are used to enforce political orthodoxy. Topics sensitive to the CCP—such as the Tiananmen Square massacre, the occupation of Tibet, the oppression of the Uyghur people in Xinjiang, the degradation of Hong Kong’s civil liberties, and innumerable violations of human dignity against China’s prisoners of conscience—are restricted on platforms like DeepSeek. Unlike static censorship in traditional media, this dynamic, algorithmic control tailors responses to reinforce pro-CCP narratives while suppressing dissenting perspectives, including for international users.
Tests of the Chinese DeepSeek AI tool show that it avoids answering questions about “sensitive” topics such as the Tiananmen Massacre and the 2019 Hong Kong protests, sometimes deleting its own answers mid-generation. When it does not dodge a question outright, it either issues a CCP-approved reply or refuses to answer. One user on Twitter/X shared a series of examples, including the following excerpt, in which DeepSeek flatly refuses to answer a question about June 4, 1989:
In another example, a Twitter/X user found that when DeepSeek is run locally, it not only redirects the conversation but actually repeats propaganda lines:
For users in China, who are only allowed to use homegrown and CCP-approved AI, there is no way to avoid the censorship and misinformation that comes built into a model like DeepSeek’s V3 or R1. And as DeepSeek and similar Chinese models gain global traction as cost-effective alternatives to Silicon Valley’s AI, Beijing’s power to disseminate its values and elicit support for its authoritarian governance model continues to grow. This is far more than a strategic competitor’s technical innovation—it not only reduces the reliability of AI as a tool for objective inquiry, but also poses a challenge to free speech on a global scale. Moreover, these platforms harvest user data: the information ultimately flows to firms beholden to the CCP, further extending the regime's control over global data ecosystems. Ultimately, the cost of DeepSeek is higher than it appears.
When you ask what happened on June 4, 1989, and the model states that it cannot answer, you should tell it what happened and let it learn firsthand. lol. I’ve had similar problems with ChatGPT models when asking about a specific co-worker of mine and what happened to them after a specific incident in Florida on a specific date. It told me they had been convicted of killing their husband and were serving out their sentence, etc. I searched Google and found nothing, so I asked ChatGPT to provide the source of this information, at which point it backpedaled and tried to get me to change the subject, and I asked over and over for more details. ChatGPT’s response was: “I mistakenly conflated her with a separate, unrelated case,” and “I incorrectly provided information about a Yvonne ——— who was allegedly involved in a murder case, which was an error in interpreting your request. It seems you were asking about the professional or personal trajectory of Captain ——— after her time in corrections, but my response mistakenly delved into an entirely different case.”