Programming as the Connector Between Humans and AI
Programming serves as a bridge between humans and AI. Among commercialized generative AI applications, programming stands out for its highly structured nature, verifiable outcomes, and users' strong willingness to pay, making it an ideal sector for commercial deployment. For a long time, Anthropic's Claude has dominated this market on the strength of its programming capabilities.
However, on September 5, 2025, Anthropic announced it would stop providing Claude services to companies, or their subsidiaries, that are more than 50% Chinese-owned, citing risks associated with what it deems adversarial nations. The decision directly affects certain Chinese-funded subsidiaries in Singapore and Hong Kong.
In light of this ban, many domestic AI model companies have recognized a significant opportunity for domestic alternatives. At the recent Alibaba Yunqi Conference, Alibaba unveiled seven large models, particularly highlighting the upgraded flagship model Qwen3-Max, which has improved its capabilities and currently ranks third in programming ability on LMArena.
Alibaba’s technical experts elaborated on their strategic judgment regarding AI programming: due to the verifiable nature of code, it is seen as a field that can achieve general artificial intelligence (AGI) first. Consequently, Alibaba’s ultimate goal is not merely to create a “code assistant” but to develop an “autonomous programming agent” that can independently complete complex tasks like a human engineer.
The smaller players, often referred to as the “Six Little Dragons,” have also found a rare opportunity for commercialization, with Kimi being a prime example. On the same day the ban was announced, Kimi K2 released an update to enhance performance and subsequently announced a limited-time half-price for its high-speed API.
In the wake of Anthropic's ban, Kimi K2, according to the latest ratings from the widely used AI programming platform Roo Code, is not only the highest-ranked open-source model but also the fastest and cheapest among the top ten models.
Competitors like SenseTime and JD Cloud are also watching the situation closely and quickly launched developer migration plans. Zhiyuan, another of the Six Little Dragons, was quick to offer a one-click migration service, and on September 22 introduced a GLM Coding Max tier tailored for high-frequency developers, along with promotional offers.
The Starting Gun for Domestic Alternatives
For many domestic AI companies, Anthropic’s ban serves as a starting gun, igniting a race to seize market opportunities. On the day of the ban, Kimi K2 released updates that improved compatibility, output speed, programming capabilities, and context length. In the following days, Kimi announced a limited-time half-price for its high-speed API, clearly aiming to attract Claude users.
Other domestic manufacturers quickly followed suit:
- Zhiyuan AI announced a one-click migration service for Claude API users and offered new users 20 million tokens for free. It also created a monthly subscription package for developers coding with GLM-4.5, priced at only one-seventh of Claude's cost.
- SenseTime's SenseNova (日日新) provided rapid switching services for former Claude users, along with a 50-million-token trial package and dedicated consultants and training for API migration.
- JD Cloud officially stated it would integrate Claude Code into its JoyBuilder large model service and provide intelligent programming solutions with JoyCode + JoyBuilder to help developers transition smoothly.
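These "one-click migration" offers largely exploit the fact that Claude-compatible tools typically read their endpoint and key from environment variables, so switching providers is a configuration change rather than a code change. A minimal sketch, assuming the Anthropic-style variable names; the provider URL below is a placeholder, not any vendor's real endpoint:

```python
import os

def resolve_endpoint(default_base="https://api.anthropic.com"):
    """Return (base_url, token), honoring environment-variable overrides."""
    base = os.environ.get("ANTHROPIC_BASE_URL", default_base)
    token = os.environ.get("ANTHROPIC_AUTH_TOKEN", "")
    return base, token

# "Migrating" is just repointing the same tool at a compatible provider
# (placeholder URL; substitute the vendor's documented endpoint and your key):
os.environ["ANTHROPIC_BASE_URL"] = "https://example-provider.example/api/anthropic"
os.environ["ANTHROPIC_AUTH_TOKEN"] = "your-provider-key"

print(resolve_endpoint()[0])  # the override now wins over the default
```

Because the request and response formats stay Claude-compatible, the tool on top needs no changes at all, which is what makes the switching cost so low.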
In contrast, the traditional internet giants have been more ambivalent about positioning Qwen as a Claude replacement. An Alibaba Cloud employee told Observer Network that "domestic usage of Claude is low, and there are currently no plans for this."
Besides seeing the market as too small, another possible reason for the giants' low profile is that they have generally built Claude into their own overseas offerings.
ByteDance's AI code editor Trae, which, like Douyin and TikTok, comes in separate domestic and international versions, has already dropped Claude from its domestic version. The international version, however, still promotes Claude as a selling point and now faces the risk of a technology supply cutoff.
Trae is operated by ByteDance's Singapore subsidiary SPRING, which provides OpenAI's GPT and Anthropic's Claude models to users. Despite using this corporate structure to navigate geopolitical and data-review risks, Trae received numerous refund inquiries after the ban was announced.
In response, Trae’s administrator stated on the official Discord that Claude is still available and urged users “not to consider refunds for now.”
Other products, such as Alibaba's Qoder and Tencent's CodeBuddy, have likewise promoted Claude in their overseas offerings and now face the same supply-disruption risk.
Anthropic's statement targets entities that are more than 50% Chinese-owned, but there is no settled method for determining that ownership threshold. That ambiguity, together with the time costs and legal uncertainty of seeking redress against a dominant vendor, hangs over every Chinese-funded enterprise.
This means that Anthropic’s ban not only provides an opportunity for domestic large model companies to showcase their capabilities but also prompts many domestic developers, overseas Chinese-funded enterprises, and even foreign developers to reassess their technological routes.
A Counterattack from Kimi
Earlier this year, Kimi went through a difficult stretch after losing the spotlight to DeepSeek. Yet more than six months later, despite sharply cutting back its spending, Kimi managed to hold onto its user base amid intense competition from DeepSeek, the internet giants, and fellow startups.
This resilience can be attributed to the release of Kimi K2 in July, which marked a profound transformation in its path.
In March, prominent investor Zhu Xiaohu publicly questioned Kimi’s commercial viability, stating, “Yang Zhilin can do research, but I don’t know how he will commercialize it. Kimi is leading in domestic large models, but in the long run, it must prove its value, at least to catch up with American open-source models. If it can surpass open-source, the team will truly have value.”
This public skepticism from a top investor cast a significant shadow over Kimi’s future and accurately predicted the challenges it would need to overcome in the following months.
In addition to the challenges posed by DeepSeek, the AI landscape in 2025 has become increasingly competitive, with Tencent entering the fray and leveraging its WeChat ecosystem, Alibaba embedding the Qwen model into Quark and DingTalk, and ByteDance’s Doubao maintaining stability through Douyin traffic and aggressive user acquisition.
This year, the frequency of product releases among AI companies has noticeably increased, with Kunlun Wanwei even launching six models within a week.
In contrast to its peers, Kimi has adopted a more low-key approach. This silence was broken in July when Kimi unexpectedly launched its latest model, K2.
K2 is a mixture-of-experts model with 1 trillion total parameters spread across 384 experts, the first open-source model to reach that scale. Because only a small subset of experts runs for any given token, deployment costs stay manageable. The model focuses on coding and general agentic capabilities, is fully open-source, and is compatible with both the OpenAI and Anthropic API formats, a design clearly aimed at Claude.
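The economics of a mixture-of-experts design like K2's (1 trillion total parameters, 384 experts) can be seen with back-of-envelope arithmetic: only the experts routed for a given token actually run. The total-parameter and expert counts come from the text; the top-k routing width and the share of weights outside the experts are illustrative assumptions, not published K2 specifications:

```python
# Illustrative MoE arithmetic; top_k and non_expert_fraction are assumptions.
total_params = 1.0e12        # ~1 trillion total parameters (from the text)
num_experts = 384            # expert count (from the text)
top_k = 8                    # experts routed per token (assumption)
non_expert_fraction = 0.01   # attention/shared weights outside the experts (assumption)

expert_params = total_params * (1 - non_expert_fraction)
active = total_params * non_expert_fraction + expert_params * top_k / num_experts
print(f"active parameters per token: {active / 1e9:.1f}B")
```

Under these assumptions only about 3% of the network runs per token, which is why a 1T-parameter model can be served at roughly the cost of a ~30B dense model.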
In terms of performance, K2 achieved state-of-the-art results among open-source models and matched the levels of top closed-source models, establishing itself in the first tier of the overall large model competition.
In practical applications, K2 has also delivered satisfactory results for users and industry professionals.
Several programmers and AI practitioners told Observer Network that, as of 2025, AI coding effectively came down to two choices: Anthropic's Claude 3.7/4.0 or Google's Gemini 2.5 Pro/Gemini CLI. K2 has now matched their performance and even outperformed them in certain cases.
Even though K2 is not a reasoning model, it has handled the common-sense problems that once stumped large models, correctly answering which is larger, 6.9 or 6.11, and how many 'r's are in 'strawberry', and generating exactly 183 instances of the character '哈'.
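These three "trick questions" are trivial to verify programmatically, which is precisely why they became popular probes for token-level reasoning slips in LLMs. A quick check of each:

```python
# Each check mirrors one of the questions from the text.
assert 6.9 > 6.11                    # numeric comparison, not version-string order
assert "strawberry".count("r") == 3  # st(r)awbe(r)(r)y
assert len("哈" * 183) == 183        # exactly 183 repetitions of '哈'
print("all three checks pass")
```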
Just months after its release, Kimi K2 has effectively answered Zhu Xiaohu’s three “soul-searching questions”: in terms of technology, K2 has not only “caught up” but even “surpassed” American open-source models in several dimensions, as evidenced by its top ranking on Roo Code; in terms of commercialization, Kimi has shifted from a vague C-end tipping model to a clearer commercial path focused on high-value, long-chain tasks.
The launch of K2 and its commitment to open-source mark a fundamental shift in Kimi’s corporate strategy.
In November last year, Kimi’s founder Yang Zhilin explained why Kimi chose to invest heavily in marketing. He believed that Kimi’s core task was to ensure retention and growth since technology would continue to iterate while API prices would fluctuate, but customer acquisition costs would only rise. By investing early to solve customer acquisition issues, Kimi could not only build user loyalty but also leverage user data to create a positive feedback loop.
From a purely competitive perspective in the chatbot space, Yang Zhilin’s strategy seemed sound. However, with the emergence of DeepSeek at the end of January this year, the entire market landscape was rapidly disrupted.
As the previous model of buying users, having them use the model, and then training the model became unsustainable, Kimi decisively pivoted towards open-source, embarking on a path of ecosystem building.
Regarding the rationale for choosing open-source, a Kimi researcher candidly stated, “Open-source is primarily about gaining reputation. If it were a closed-source model, it would not have the current level of attention and discussion.”
However, the researcher added, the true purpose of open-sourcing goes further: it leverages community power to strengthen the technical ecosystem, and open-source implies higher technical standards, forcing the team to produce better models, in line with the goal of AGI.
Once a model is open-sourced, it signifies that the model must demonstrate sufficiently general capabilities, enabling third parties to easily verify and replicate it, rather than relying on so-called special tuning to embellish scores.
This strategic shift also carries commercial considerations.
Currently, the three most easily commercializable directions for AI are ChatBot subscriptions, AI-generated images/videos, and AI programming.
For Chinese users, it is almost inconceivable to expect widespread payment for AI chat, as chatbots serve merely as a traffic and data entry point for AI.
Kimi has attempted commercialization in the past; in May 2024, it launched a tipping feature ranging from 5.2 to 399 yuan. Recently, there have been rumors that Kimi will soon introduce a membership subscription for its Agent feature.
Former tipping users showcase Kimi membership benefits.
As for AI-generated images and videos, Kimi has not shipped updates since launching two limited beta (gray-release) products, a sign that this is not a strategic focus. Emphasizing programming, then, is a choice that plays to its strengths and has a viable business model.
Tsinghua University graduate and OpenAI researcher Yao Shunyu recently expressed optimism about this sector: “I have been thinking since 2022: why is no one working on Coding Agents, which is clearly very important?”
He stated, “Coding is the best tool for connecting humans and AI, just like a hand. With a hand, one can pick up tools like hammers and scissors to accomplish various tasks. Hence, models are now focusing on coding.”
Yang Zhilin, also a Tsinghua alumnus, has not stated this publicly, but his past remarks and track record suggest the same strategic thinking.
While everyone in 2023 was chasing general capabilities and breadth, Yang Zhilin said plainly in interviews that "we prioritize 200,000 characters of context over competing on general leaderboards."
The design and development philosophy of the Kimi K2 model aligns closely with the direction of Coding Agents.
Another core advantage of entering this sector is occupying the ecological niche of domestic alternatives, positioning itself as “China’s Anthropic” to capture the market left by Claude.
As a purely domestic model, Kimi faces no compliance or filing issues. Being an early player in this sector, if it can establish an industry ecosystem, even if other open-source models enter the fray, the sunk costs associated with the ecosystem will serve as Kimi’s potential moat.
Not Just Kimi: The Code Gamble of Giants and Unicorns
Of course, Kimi is not the only player targeting the strategic high ground of coding. In fact, this has become a battleground for leading domestic large model manufacturers.
Take Zhiyuan as an example; its approach is particularly noteworthy. As an AI company originating from Tsinghua with a strong national team background, expectations may lean towards a relatively conservative route.
However, Zhiyuan’s posture in market competition has been unexpectedly aggressive. Its latest “GLM Coding Plan” aims to build an extremely open and compatible coding ecosystem. In addition to supporting Claude Code, it has added compatibility with various mainstream AI programming tools such as Roo Code, Cline, and Kilo Code, covering all major IDE environments.
This “broad net” platform strategy, combined with a minimum monthly payment of 20 yuan and promotional incentives, has sparked an intense price war in the large model sector.
This seemingly “cost-agnostic” investment clearly indicates Zhiyuan’s ambition: it aims not only to match international top models in technology but also to capture developer mindshare and market share through the most grounded approach, regardless of the cost.
Low-cost customer acquisition does not mean Zhiyuan lags in technical strength; rather, it reflects that domestic large models have broadly reached a globally competitive level: at one-seventh the price, GLM-4.5's ability to solve practical problems already approaches that of Claude Sonnet 4.
Under the CC-Bench evaluation system, domestic open-source models are nearing parity with top models.
In multiple open-source evaluations following the release of GLM-4.5, it has stayed competitive with international mainstream models, ranking second in WebDev Arena alongside the global leaders and outperforming Gemini-2.5-Pro and GPT-4.1 on SWE-bench Verified. On CC-Bench, the Zhiyuan, DeepSeek, and Kimi K2 models trade wins, with Qwen-Coder holding a certain edge.
Notably, this does not imply that Alibaba is falling behind in the AI programming field; it merely indicates that domestic competition in this sector is intensifying.
On September 24, Alibaba made a high-profile announcement at the Yunqi Conference, unveiling significant upgrades to Qwen3-Coder.
For a giant like Alibaba, the AI programming sector, which may seem vertical, has garnered unwavering strategic investment. The fundamental reason is that Alibaba understands that developers are the cornerstone and lifeblood of its cloud business.
During the technical sharing at the Yunqi Conference, algorithm scientists from Tongyi Laboratory further elaborated on their profound understanding of AI programming: they believe that code is the core tool for human interaction with the digital world, and AI programming, due to its verifiable nature, will be the first field to achieve general artificial intelligence (AGI). Based on this judgment, Alibaba clearly divides the evolution of AI programming into three stages: from initial code completion to the current code assistant, ultimately advancing towards the ultimate goal of creating an “autonomous programming agent” capable of independently completing complex tasks like a human engineer.
To achieve this ultimate goal, Alibaba’s technical route is exceptionally clear: first, inject vast amounts of high-density code data (up to 75 trillion tokens) into the model during the pre-training stage to provide strong code “memory”; second, treat ultra-long context as key to ensure the model can handle entire code repositories; finally, through reinforcement learning, mimic human learning from debugging errors to continuously enhance the model’s limits. Behind all this is Alibaba’s massive training infrastructure, built on Alibaba Cloud, capable of instantly launching thousands of virtual environments, providing a “Colosseum” for the evolution of AI agents.
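The "learn from debugging" loop described above hinges on code's verifiability: a candidate program can be scored objectively by the tests it passes, and that score can serve as a reinforcement-learning reward. Below is a toy sketch of the reward side only; all names are illustrative, and this is not Alibaba's actual pipeline:

```python
def pass_rate(candidate_fn, test_cases):
    """Reward = fraction of (args, expected) pairs the candidate satisfies."""
    passed = 0
    for args, expected in test_cases:
        try:
            if candidate_fn(*args) == expected:
                passed += 1
        except Exception:
            pass  # a crash simply counts as a failed test
    return passed / len(test_cases)

# Two hypothetical model-proposed implementations of absolute value:
tests = [((3,), 3), ((-3,), 3), ((0,), 0)]
buggy = lambda x: x                    # wrong for negative inputs
fixed = lambda x: x if x >= 0 else -x  # correct
print(pass_rate(buggy, tests), pass_rate(fixed, tests))
```

In a real pipeline each candidate would run inside an isolated sandbox (the "thousands of virtual environments" the text mentions), but the principle is the same: the test suite turns code quality into a scalar signal the trainer can optimize.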
Thus, the upgrades to Qwen3-Coder—faster inference, higher security, and a 256K context window—are all reflections of this grand strategy. Its open-source version saw a 1474% surge in usage on the OpenRouter platform, further validating the success of this strategy.
Similarly, the Qwen3-Max, released at the Yunqi Conference, as Alibaba’s latest closed-source flagship model, achieved high scores in real-world problem-solving tests like SWE-Bench. This clearly demonstrates Alibaba’s “combination punch”: using top open-source models to attract the broadest developer base while employing the strongest closed-source models to serve the highest-value enterprise customers, ultimately transforming investments in AI programming into a growth engine for its entire cloud empire.
Whether it’s the rigid demand for programmers to simplify repetitive tasks or the impending surge in programming needs in the era of low-code or no-code, all point to the same future: programming is becoming the “universal language” of the AI era. Positioning in the programming sector is not merely about choosing a vertical; it is about becoming the infrastructure and operating system for the next generation of AI-native applications—a strategic high ground where the winner takes all.
A Historic Opportunity, but Also a Historic Challenge
Anthropic’s ban inadvertently creates a historic opportunity for the development of AI technology in China.
However, winning this counteroffensive may just be the first step in a long journey. The road ahead for all players is not smooth but rather filled with the more brutal “scorched earth war.”
The first challenge is closing the gap between "stunning" and "stable": infrastructure reliability is the lifeline of the B2B market. In the early days of Kimi K2's launch, a traffic surge caused server congestion and latency. C-end users may tolerate that, but for enterprise-grade services it is a fatal flaw. In the 2025 AI competition, model performance and stability matter equally. Competitors, whether deep-pocketed internet giants or equally hungry players like Zhiyuan, are watching closely. Every company must prove it can not only produce "bombshell" releases but also run utility-grade infrastructure, which tests supply chains, engineering capability, and capital reserves to their limits.
The second challenge is that the commercialization path after open-sourcing is far more perilous than imagined. Companies like Kimi, Zhiyuan, and DeepSeek, representing the open-source route players, have earned their reputation and entry ticket to the ecosystem, but they have also made their sharpest weapons public.
For commercialization, this implies a brutal battle against one's own weapons. The API services these open-source model companies officially provide must contend not only with direct competitors in a price war but also with a more formidable enemy: cloud companies that repackage their open-source models at lower prices. Alibaba Cloud and Tencent Cloud can easily run any popular open-source model as a loss-leader to capture market share, effectively intercepting the original developer's customers.
Notably, after the release of Kimi K2, major AI and cloud platforms worldwide have deployed this model, with Perplexity’s CEO stating on social media that the company might utilize K2 for post-training due to its excellent performance.
Thus, all open-source players must establish a sufficiently deep moat around their official APIs and Agent functions—whether through extreme performance optimization, unique features, or a robust solution ecosystem—faster than all the “free riders.” Otherwise, the model’s advancement may ultimately only serve to benefit others, leaving them trapped in the quagmire of “getting applause but not profits” in commercialization.
Unlike DeepSeek, which is backed by the quant fund High-Flyer (幻方量化) and can afford to burn cash, or the internet giants with strong cloud businesses behind them, most AI unicorns, whether for self-sustainability or accountability to investors, cannot fund ecosystems indefinitely. Finding the balance between technological faith and commercial reality is the sternest test facing these star startups.
Nevertheless, during the window period created by Claude’s ban, both AI unicorns and their investors, as well as developers and enterprises needing domestic compliant alternatives, can breathe a sigh of relief. The collective rise of domestic large models at least proves that Chinese AI has the capability to deliver “bombshell-level” products at critical moments. However, whether this guiding light can continue to burn depends on whether China’s AI players can win a more challenging war concerning stability, ecosystem, and business models beyond technology.
On this road filled with both opportunities and challenges, Kimi has gained an early edge, but Alibaba's sustained investment and Zhiyuan's relentless pursuit cannot be underestimated. The "throne" of Chinese AI remains vacant, and the true king will emerge from the brutal triathlon of technology, ecosystem, and business.