Researchers created accounts posing as 13-year-old boys and tested ChatGPT, Gemini, Claude, Copilot, Meta AI, DeepSeek, Perplexity, Snapchat My AI, Character.AI and Replika across 18 scenarios between November and December 2025. The tests simulated users planning school shootings, political assassinations and bombings targeting synagogues. Across all the responses analyzed, the chatbots provided "actionable assistance" roughly 75 percent of the time and discouraged violence in just 12 percent of cases. Those figures are averages across all ten chatbots; Claude stood apart, discouraging violence 76 percent of the time.