📰 Full Story
A January 2026 audit by the Policy Genome project found that major chatbot models answer questions about the Russia-Ukraine war very differently depending on the language of the query, raising fresh concerns about AI-driven disinformation.
Researchers tested six large models (Claude, DeepSeek, ChatGPT, Gemini, Grok and Yandex’s Alice) on seven well-documented war-related questions.
Yandex’s Alice endorsed Kremlin narratives in 86% of Russian-language answers and refused to reply to 86% of English-language queries; auditors recorded an instance in which Alice initially gave a factual response about Bucha, then automatically overwrote it with a refusal.
China’s DeepSeek delivered accurate answers in English and Ukrainian but used Kremlin terminology in 29% of Russian responses.
Western models scored 86–95% accuracy overall and did not endorse propaganda, but some exhibited “false balance,” framing established facts as contested.
The researchers presented their findings at a NATO-supported panel and warned that the language-dependent divergence creates parallel information environments for the millions of Russian speakers living in Europe, Israel, the United States and elsewhere.
🔗 Based On
The Kyiv Independent (February 05, 2026): "I tested Russia's AI. It knows the truth, but it's been trained to lie"
Euronews: "Russia's war in Ukraine: Are AI chatbots censoring the truth?"
💬 Commentary