YouTube Recommends Russian Over Kyrgyz Content to Children—and the Algorithm May Not Know Why
A controlled study of YouTube's recommendation algorithms reveals systematic bias against Kyrgyz-language children's content in favor of Russian-language videos. The bias stems from how the algorithm learns from engagement signals that favor the regionally dominant language.

Researchers studying YouTube's recommendation system have found that it consistently favors Russian-language videos over Kyrgyz-language content when children search for videos, raising questions about how algorithms shape language preservation. A controlled study that collected nearly 11,000 unique search results and recommendations shows the bias is systematic, affecting not just search rankings but also the suggestions the platform's recommendation engine surfaces unprompted.
The research started when anthropologist Ashley McDermott heard during fieldwork in Kyrgyzstan that children there were losing fluency in their native language. To test whether technology was playing a role, the research team built automated accounts to simulate real user behavior—watching videos, searching for children's content, and tracking what YouTube suggested next.
What the Researchers Found
When test accounts searched for popular children's topics in Kyrgyz, they often got zero results or were redirected to Russian videos instead. And when Kyrgyz content did appear, it ranked below Russian equivalents, even when those Russian videos had fewer views and fewer subscribers.
The bias went deeper than search results. Accounts that watched only Kyrgyz children's videos still received far fewer Kyrgyz recommendations in their feed compared to accounts with no language preference. This happened even after hours of watching Kyrgyz content—the kind of behavior that would normally train YouTube's algorithms to show more of that language.
In other words: the algorithm seemed to assume Russian was the regional default and stuck to that assumption despite evidence that users wanted Kyrgyz instead.
How YouTube's Recommendations Work
YouTube's recommendation system (the one that populates your feed and suggests what to watch next) uses machine learning trained on millions of signals: how long you watch videos, whether you like them, how many subscribers a channel has, and where you're located. The system is designed to maximize the time you spend watching.
When a platform operates across multiple languages, it naturally favors whichever language has the most content, the most viewers, and the highest engagement. Russian has roughly 260 million speakers across many countries, while Kyrgyz has about 4.5 million speakers, almost all in Kyrgyzstan. So Russian videos get uploaded more frequently, reach bigger audiences, and generate more watch time—the exact signals the algorithm uses to decide what's worth recommending.
This creates a feedback loop: Russian content gets recommended more, so it gets watched more, so it looks even more popular to the algorithm, so it gets recommended even more. Meanwhile, Kyrgyz creators struggle to build audiences because fewer people ever see their videos in the first place.
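The dynamic described above can be sketched as a toy "rich get richer" simulation. This is a deliberate simplification under assumed numbers (the initial weights loosely echo the speaker-population figures above), not a model of YouTube's actual system: recommendations are drawn in proportion to accumulated watch time, and each recommendation adds more watch time to the language that was picked.

```python
import random

def simulate(steps=10_000, seed=0):
    """Toy popularity-feedback loop: recommend in proportion to
    accumulated watch time, then reinforce whichever pool was picked."""
    rng = random.Random(seed)
    # Illustrative starting weights only (stand-ins for initial popularity).
    watch_time = {"russian": 260.0, "kyrgyz": 4.5}
    for _ in range(steps):
        total = sum(watch_time.values())
        # Probability of being recommended = current share of watch time.
        pick = "russian" if rng.random() < watch_time["russian"] / total else "kyrgyz"
        # Each recommendation generates more watch time for that language,
        # which raises its probability of being recommended next time.
        watch_time[pick] += 1.0
    total = sum(watch_time.values())
    return {lang: t / total for lang, t in watch_time.items()}

shares = simulate()
print(shares)
```

Running this, the language that starts with the larger share keeps nearly all of it: because recommendations are proportional to past watch time, the initial imbalance is preserved and the smaller pool rarely gets a chance to grow. That is the feedback loop in miniature.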
This Pattern Has Appeared Before
We have seen something like this happen before, when early search engines accidentally buried non-English content. Google's original ranking system, for example, favored English websites because there were simply more links pointing to English pages—a measurement problem that compounded into a visibility problem.
The difference now is that young people rely on YouTube and similar platforms for most of what they learn and watch. When an algorithm shapes what content is even discoverable, it shapes what languages young people encounter, how fluent they become, and whether they see their culture reflected back at them. This is no longer a side effect of how search ranking works—it is a direct influence on whether languages survive.
What Could Be Done About It
The researchers suggest one practical workaround: parents can create hand-curated playlists of Kyrgyz children's content, which bypasses YouTube's recommendation system entirely. This does work, but it asks families to do the work that the platform should be doing—and it only helps families who have the time and knowledge to build these lists.
A real solution would require YouTube to redesign how its algorithms handle languages with smaller speaker populations. The company could weight linguistic diversity as a goal alongside engagement metrics, or create dedicated discovery pathways for indigenous languages. But that would mean the algorithm might recommend less-popular content to some users, which could mean people watch for slightly shorter periods—and watch time is what drives YouTube's advertising revenue.
The Bigger Picture
This study in Kyrgyzstan points to a broader challenge: platforms optimized for global engagement naturally favor dominant languages. When children rely on YouTube for learning and entertainment, an algorithm's preference for Russian over Kyrgyz becomes a force affecting cultural survival.
The technical fix is not simple. A recommendation system designed to work across billions of users and thousands of languages faces real constraints. But the fact that it is difficult does not make it impossible—and the research provides a clear way to measure the problem, which is the first step toward solving it.
As digital platforms become the primary way young people encounter information and culture, these algorithmic choices matter more than they used to. Understanding where algorithms create unintended bias is the only way platforms can start to address it.


