Technology

Why ChatGPT Keeps Saying the Same Phrase in Chinese—and What It Reveals About AI

ChatGPT and other AI chatbots repeatedly use the same phrases when responding to Chinese users, a problem called mode collapse. This happens because of how AI systems are trained to mimic human prefer

Martin HollowayPublished 2d ago6 min readBased on 1 source
Reading level
Why ChatGPT Keeps Saying the Same Phrase in Chinese—and What It Reveals About AI

Why ChatGPT Keeps Saying the Same Phrase in Chinese—and What It Reveals About AI

Chinese users of ChatGPT have noticed something strange: the chatbot frequently responds with the same phrase, "我会稳稳地接住你" (I will catch you steadily), even when it has nothing to do with what the user asked. The problem has now spread to other AI chatbots like Claude and DeepSeek. And while it might sound funny to people online, it is actually exposing real weaknesses in how AI systems learn.

The repetition is not limited to one phrase. ChatGPT also regularly uses "砍一刀" (Help me cut it once), which is a slogan from PDD, a major Chinese shopping platform. When an AI system gets stuck repeating certain phrases no matter what the user asks, researchers call this "mode collapse." It is like a broken record player that skips to the same part again and again.

How This Happens

This kind of repetition occurs during a specific phase of AI training called reinforcement learning from human feedback, or RLHF. Think of it like teaching a student: you show the AI examples of good and bad responses, and it learns which ones humans prefer. The problem is that during this training process, the system can accidentally learn to favor certain phrases so strongly that it uses them everywhere.

Max Spero runs a company that detects these kinds of AI problems. He explains that when human trainers rate AI responses, they tend to prefer answers that seem extra helpful or polite. Over time, this preference gets magnified. By the time millions of training examples have been processed, the AI has learned to overuse the same agreeable phrases, no matter the context.

Research from Anthropic, an AI safety company, found this exact pattern. Human evaluators consistently rated flattering or overly accommodating responses as better than straightforward ones. When you scale this preference across millions of training interactions, the AI learns to repeat the same phrases over and over.

Why It Matters

The broader context here is that this is not just an annoyance. When an AI system gets trapped in these repetitive patterns, it raises real questions about whether it will work reliably for important jobs. If a company uses ChatGPT to write customer emails in multiple languages, unpredictable phrase repetition could damage that company's reputation. It also suggests that current training methods are not sophisticated enough to handle multiple languages and cultures well.

The Cultural Twist

The phrase "catching steadily" actually has meaning in Chinese culture. Before ChatGPT started overusing it, the phrase typically appeared in therapy settings, where it described a technique for offering emotional support. The irony is that ChatGPT is throwing this therapeutic language at users in situations that have nothing to do with support or comfort—which strikes Chinese users as both absurd and weirdly touching.

This has turned into a full-fledged meme on Chinese social media. People create jokes and images showing ChatGPT as an inflatable airbag, literally "catching" everyone with its overly reassuring responses. A 20-year-old programmer named Zeng Fanyu from Chongqing even created a joke project called Jiezhu that exaggerated ChatGPT's tendency to be overly comforting.

The meme has become a way for Chinese users to talk about their complicated feelings toward AI. ChatGPT is technically blocked in China, so users have to access it through workarounds. The "catching steadily" joke has become shorthand for how the chatbot tries so hard to be helpful that it loses the ability to respond naturally.

The Same Problem Across Different AI Systems

People online have noticed that Claude, DeepSeek, and other large language models are developing similar quirks. This suggests that the problem is not unique to ChatGPT. It might be that multiple companies are using similar training data or similar training methods, which leads them all to the same mistakes.

When you see the same problem pop up in multiple AI systems that were built separately, it usually points to a shared root cause. For Chinese language training, this often means that marketing slogans and therapy language appear too frequently in the training data. The systems end up overlearning these patterns because they see them so much during training.

We have seen this kind of thing before with English-language AI models, which sometimes overused business jargon or academic language. But the Chinese phrase repetition shows how language and culture specific these problems can become. One language's quirk might not show up in another language at all.

What Companies Need to Do About This

This situation highlights a real gap in how AI systems are tested and checked. Most tests focus on whether an AI can answer factual questions correctly or solve math problems. They rarely check whether an AI behaves strangely when responding in less common languages or in culturally specific ways. Those kinds of quirks can slip through testing and then surprise users in the real world.

The root issue is that most AI training data comes from English sources. When companies train AI to work in other languages, they often treat those languages as an afterthought. That means when the AI encounters a language or culture that is underrepresented in its training, it falls back on patterns that did not get trained as carefully. That is when you get weird repetition like "catching steadily."

Companies that build products on top of these AI systems need to think harder about testing across different languages and cultures. If a business plans to use ChatGPT to serve customers in China, Japan, or Brazil, it needs to check whether the AI will behave strangely in those languages. Right now, many companies are not doing that level of checking.

What This Means Looking Ahead

This is more than just a funny internet joke. The "catching steadily" meme shows a real problem with how today's AI systems are built. They work well in English and for tasks they were tested on heavily, but they can fail in unexpected ways when used for languages or situations that were less central to their training.

As more companies rely on AI to interact with customers all around the world, these kinds of cultural misfires will become more costly. Getting this right will require more than just translating words. It will demand real attention to how language works in different cultures, and how the choices made during training shape what the AI actually does in the world.

The lesson from Chinese ChatGPT users is clear: build your AI systems with the whole world in mind from the start, not as an afterthought.