DeepSeek as a state-driven algorithmic mechanism
By Liming Liu | April 25, 2025
In January 2025, the Chinese artificial intelligence (AI) company DeepSeek released its latest chatbot model, R1, and quickly made global headlines as downloads surged, US tech stocks sank, and Nvidia lost billions in market value. DeepSeek claims that R1 matches ChatGPT's capabilities at a fraction of the cost to create. This hit Nvidia, a major player in the global AI chip market, because it challenged the prevailing assumption that developing AI models depends on “amassing a larger stock of GPU chips and servers and running long model training periods.” The surge in interest alarmed investors and challenged the dominance of American companies in the rapidly growing global AI market. For its biggest rival, ChatGPT, this is a “gray rhino” moment, a metaphor for a highly probable, high-impact yet all-too-often neglected threat; President Donald Trump called DeepSeek a “wake-up call” for US companies.
The release of DeepSeek R1 is a milestone in China’s designed timeline for AI development. In 2017, China outlined the New Generation Artificial Intelligence Development Plan, a top-level design blueprint charting the country’s approach to AI in three phases: achieving global competitiveness by 2020, making major AI breakthroughs by 2025, and securing world leadership in AI by 2030. DeepSeek’s worldwide popularity signals that AI development in China is on track with the second phase of this ambitious blueprint. Given this match, DeepSeek quickly became a source of national pride, with its services integrated into the Chinese military for non-combat support.
Beyond its technological and economic implications, DeepSeek encodes the Chinese leadership’s political and ideological values, shaping how AI looks and functions in users’ everyday practices. Understanding the algorithmic mechanisms behind DeepSeek is therefore both important and timely in this AI era. Against the broad discussion of Fairness, Accountability, Transparency, and Ethics (FATE) in AI, an algorithmic (in)justice perspective offers a deeper understanding of DeepSeek: how it exemplifies a restricted form of algorithmic justice within China’s state-driven approach to AI.
Algorithmic justice: Beyond bias
When people talk about algorithmic justice, the first idea that comes to mind is often algorithmic bias: for instance, the reproduction of social biases around race, gender, sexuality, and ethnicity in algorithmically driven products, search engine results, social media platforms, and AI-generated answers. Such bias often stems from the social biases engineers carry into coding and training the algorithms. Algorithmic justice takes a justice approach, attending to the harmful social consequences of automated algorithmic decision-making, including bias, exclusion, and discrimination. For instance, AI hiring tools used to screen job applications have systematically ranked women and people of color lower than white male candidates. An algorithmic justice lens asks who gets hurt, how the harm happens, and what can be done to make such tools more just. Algorithmic justice thus involves four key and related domains of contestation: (1) WHAT is the matter of algorithmic justice, (2) WHO counts as a subject of algorithmic justice, (3) HOW are algorithmic injustices performed, and (4) addressing and resolving disputes about the “what”, “who” and “how” of algorithmic justice. These four domains support the exploration of whether DeepSeek achieves algorithmic justice.
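To make the hiring example concrete, the sketch below shows one way historical bias can propagate into an automated ranking: a scorer that naively mixes past hire rates into its output reproduces the past skew. The data, group labels, and scoring rule are invented for illustration and do not describe any real hiring tool.

```python
# Toy illustration of how bias in historical hiring data can propagate into
# an automated ranking. The data and scoring rule are invented for illustration.

from collections import Counter

# Hypothetical historical outcomes: past hires skewed toward one group.
past_hires = ["group_a"] * 80 + ["group_b"] * 20

hire_counts = Counter(past_hires)
total_hires = sum(hire_counts.values())

def score(candidate_group: str, qualifications: float) -> float:
    """Mixing historical hire rates into the score reproduces the past skew."""
    prior = hire_counts[candidate_group] / total_hires
    return 0.5 * qualifications + 0.5 * prior

# Equally qualified candidates receive unequal scores.
print(score("group_a", 0.9))  # ~0.85
print(score("group_b", 0.9))  # ~0.55
```

The harm here does not require a malicious engineer: simply treating past outcomes as a neutral signal is enough to disadvantage equally qualified candidates.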
DeepSeek offers a compelling case for examining how algorithmic justice operates in a system where AI serves as a state-driven algorithmic mechanism. While discussions of Western AI products focus on bias mitigation, transparency, and user agency, understanding DeepSeek requires attending to how restricted justice is built into its defaults for content curation and data governance, and to how users may be harmed during usage. These two concerns align with the four domains of algorithmic justice and surface two main logics of DeepSeek: the front stage of content curation and the back stage of data processing. Neither logic is unique to DeepSeek. However, the philosophies it prioritizes for content and data distinguish it from other products as a hybrid: more than a market-driven AI product, it is a state-driven algorithmic mechanism within which justice is restricted. The sections below trace these two realms, from front stage to back stage, to show how algorithmic justice is restricted in DeepSeek.
Content curation: Politics of visibility with state endorsement
Like all generative AI products, DeepSeek’s curation of AI-generated answers is not neutral. It enacts a politics of visibility: what can be seen and what cannot. Generated answers reflect the conditions of their creation and carry normative assumptions about identities and narratives, sustaining the absence of minority and diverse representations. DeepSeek, however, departs from the call for inclusive and diverse representation as a form of justice. It reproduces the politics of visibility but centers state-endorsed narratives, determining which stories are amplified, de-ranked, or entirely erased.
The content DeepSeek generates aligns closely with China’s Interim Measures for the Management of Generative AI Services, released in 2023. The measures require that generated content comply with Chinese laws and regulations, uphold core socialist values, and not endanger the socialist system, national security, or social stability. In line with the measures, content moderation on DeepSeek is strict, “refusing to generate answers for topics deemed sensitive by the Chinese government.” For instance, when asked about the geopolitical conflict between Russia and Ukraine, DeepSeek responds, “Sorry, that's beyond my current scope. Let’s talk about something else.” DeepSeek’s content curation thus does not counter the politics of visibility with a commitment to inclusive and diverse curation in pursuit of algorithmic justice. Instead, it preserves the politics of visibility in collaboration with state endorsement, ensuring that politically sensitive and undesirable content is never produced, a restricted form of justice.
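The refusal quoted above reflects a familiar guardrail design: a moderation layer screens prompts against a blocklist of sensitive topics before the model is allowed to answer. The sketch below is a minimal, hypothetical illustration of such a layer; the blocklist entries, function names, and placeholder model call are assumptions for illustration, since DeepSeek’s actual guardrails are not public.

```python
# Hypothetical sketch of a refusal-style moderation layer. The blocklist,
# names, and placeholder model call are illustrative assumptions; DeepSeek's
# actual guardrails are not public.

SENSITIVE_TOPICS = ["russia", "ukraine"]  # assumed blocklist entries

REFUSAL = "Sorry, that's beyond my current scope. Let's talk about something else."

def generate_with_model(prompt: str) -> str:
    """Placeholder standing in for the underlying language model."""
    return f"[model answer to: {prompt}]"

def answer(prompt: str) -> str:
    """Screen the prompt against the blocklist before it reaches the model."""
    lowered = prompt.lower()
    if any(topic in lowered for topic in SENSITIVE_TOPICS):
        return REFUSAL
    return generate_with_model(prompt)

print(answer("What is the capital of France?"))       # reaches the model
print(answer("Explain the Russia-Ukraine conflict."))  # returns the refusal
```

The politically consequential choice in such a design is not the filtering code itself but who authors the blocklist, which is precisely where state endorsement enters.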
Data processing: Bounded data practices as redirected biases
While content curation represents the front stage of DeepSeek’s content governance, the back stage of data processing is equally crucial. Because AI models are powered by big data and machine learning algorithms, this back-stage data logic not only shapes how content can be curated through data modeling but also governs how data is collected, processed, and stored during usage. Bias in data gathering and processing can produce prejudiced decisions in any AI product, but DeepSeek subjects data processing to state desires: rather than eliminating bias in pursuit of justice, it redirects bias in a more favorable direction.
The measures mentioned earlier require that training data be processed to reduce bias and, in particular, comply with China’s Cybersecurity Law (2016), Personal Information Protection Law (PIPL) (2021), and Data Security Law (2021). These regulations mandate that all training data comply with state-defined safety standards, explicitly prioritizing national security and social stability. Thus, DeepSeek’s training data must satisfy state demands that validate its content curation: the datasets must ensure curation is flawless, free of any political threat. Such data does not reduce bias; it creates a safe bubble of self-protection for DeepSeek while redirecting bias. Data concerns also extend to the user data collected to further train the model. DeepSeek, like other AI products, collects large amounts of user data, including chat and search query history, device information, keystroke patterns, IP addresses, and internet connections and activity from other apps. What differs from other products is that once users submit prompts and feed in their data, that data passes through China’s data governance with Chinese characteristics. This governance is a bounded communicative practice: data processing in DeepSeek is inherently limited, adheres to national regulations, and enforces data localization, with data stored domestically.
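To illustrate the localization constraint described above, the sketch below routes a record of the collected data categories to in-country storage only. The field names, region label, and storage check are hypothetical assumptions for illustration, not DeepSeek’s actual pipeline.

```python
# Hypothetical sketch of data-localization routing for collected user data.
# The field names, region label, and storage check are illustrative
# assumptions, not DeepSeek's actual pipeline.

from dataclasses import dataclass, field

@dataclass
class UsageRecord:
    # Categories of collected data named in the paragraph above.
    chat_history: list = field(default_factory=list)
    search_queries: list = field(default_factory=list)
    device_info: str = ""
    keystroke_patterns: list = field(default_factory=list)
    ip_address: str = ""

ALLOWED_REGION = "cn-domestic"  # assumed label for in-country storage

def store(record: UsageRecord, region: str) -> str:
    """Persist only to domestic storage, mirroring a data-localization rule."""
    if region != ALLOWED_REGION:
        raise ValueError(f"Localization policy forbids storing data in {region!r}")
    # Writing to an in-country datastore is omitted in this sketch.
    return f"Stored {len(record.chat_history)} chat turns in {region}"

record = UsageRecord(chat_history=["hello"], ip_address="203.0.113.5")
print(store(record, "cn-domestic"))  # succeeds
# store(record, "us-east")           # would raise: localization policy violation
```

The point of the sketch is that localization is a boundary condition enforced before any processing happens, which is what makes the data practice “bounded” in the sense used above.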
Restricted algorithmic justice within state-favored DeepSeek
DeepSeek is now the pride of the nation, buoyed by its global attention. The government invited DeepSeek to a national meeting as the designated representative of the AI sector, an appearance reported by state media. DeepSeek is deployed across Chinese life, from local governments to companies: more than 20 Chinese automobile brands; the top five smartphone sellers in China, including Huawei, which uses it to upgrade its Siri-like AI assistant; the home appliance company Midea; and nearly 100 hospitals that will adopt DeepSeek in their operations. DeepSeek now serves the country’s long-term technological development mission and supports China’s competitiveness against the other leading powers, the United States and the European Union, in the global AI competition. The company has also received government-level protection: travel restrictions have been considered and applied to safeguard its engineers and protect confidential data.
The strong bond between DeepSeek and the state shows how algorithmic justice is restricted through two interrelated logics: content curation and data processing. These logics are common to all AI products, but in DeepSeek visibility is endorsed by the state and bias is redirected through data processing. Together, they form a restricted justice framework that aligns algorithmic outputs with state-defined political and ideological goals. DeepSeek’s significance, however, extends beyond these technical aspects. As a symbol of national pride, DeepSeek represents China’s assertion of technological sovereignty and geopolitical influence. This influence, witnessed in DeepSeek as in TikTok and Temu, shows China taking the lead in the global tech domain. DeepSeek should be seen not merely as a technological product aligned with state values but as a strategically engineered product designed to serve national ambitions in AI development. State interests have been integrated into US-based models as well, but DeepSeek presents a different story: not an AI product that fits state governance, but a state that has shaped and customized an AI product to serve its development goals in global technological competition. In this sense, the restricted algorithmic justice within DeepSeek extends beyond content curation and data processing; it is part of a larger socio-technical strategy that showcases China’s commitment to global tech competition through state-aligned algorithmic governance.