Roskomnadzor to Implement AI in Managing Banned Information Registry
According to media reports, Roskomnadzor (RKN) plans to start using artificial intelligence (AI) this year to maintain its registry of banned information. The technology is expected to build on a system that already analyzes and classifies online texts.
The newspaper Kommersant reviewed the new version of Roskomnadzor’s digital transformation program and reports that the integration of AI technologies is intended to reduce costs and identify “non-obvious connections.” In two years, the agency expects to use similar technologies to manage the registry of personal data operators as well.
The document states that starting in 2024, RKN will compile and maintain the registry of blocked websites using artificial intelligence. This is outlined in the description of work related to Roskomnadzor’s Unified Information System (UIS, which also consolidates registries of licenses, media, and permits) and the Internet Resource Monitoring Information System (IS MIR). In 2023, according to the program, the banned sites registry was maintained without the use of AI.
IS MIR is designed to track texts containing prohibited information, classify them by the author’s tone (neutral, negative, or positive), and search for reprints. In 2023, the Main Radio Frequency Center, a Roskomnadzor-affiliated organization, announced a tender to integrate IS MIR with other systems, including IS “Oculus,” which is used to search for images and symbols.
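The article does not describe how IS MIR carries out these two tasks. Purely as an illustration of what tone classification and reprint detection can look like in the simplest case, the sketch below labels a text’s tone with a naive keyword lookup and flags reprints by Jaccard similarity over word shingles. The keyword lists, threshold, and sample texts are invented for this example and have no connection to the agency’s actual system.

```python
# Hypothetical toy sketch of the two functions the article attributes to IS MIR:
# labeling a text's tone and detecting reprints. Keyword lists and the similarity
# threshold are invented for illustration; this is not the agency's actual system.

NEGATIVE = {"ban", "illegal", "violation"}   # assumed example keywords
POSITIVE = {"approve", "support", "praise"}  # assumed example keywords


def classify_tone(text: str) -> str:
    """Naive keyword-count classifier: negative, positive, or neutral."""
    words = set(text.lower().split())
    neg, pos = len(words & NEGATIVE), len(words & POSITIVE)
    if neg > pos:
        return "negative"
    if pos > neg:
        return "positive"
    return "neutral"


def shingles(text: str, n: int = 3) -> set:
    """Overlapping n-word shingles used for near-duplicate comparison."""
    words = text.lower().split()
    return {" ".join(words[i:i + n]) for i in range(max(len(words) - n + 1, 1))}


def is_reprint(a: str, b: str, threshold: float = 0.6) -> bool:
    """Treat two texts as reprints if their shingle sets overlap heavily (Jaccard)."""
    sa, sb = shingles(a), shingles(b)
    return len(sa & sb) / len(sa | sb) >= threshold


if __name__ == "__main__":
    original = "The regulator announced a ban on the illegal resource"
    copy = "The regulator announced a ban on the illegal resource yesterday"
    print(classify_tone(original))     # -> negative
    print(is_reprint(original, copy))  # -> True
```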
According to the document, last year Roskomnadzor detected illegal content on the internet three hours after publication. In 2024, this response time is planned to be reduced to two hours, and to one hour by the end of 2026.
RKN also aims to improve accuracy, for example, by reducing the share of violations in the media identified in error from 20% in 2023 to 10% in 2026.
As Just AI product manager Alexey Borshchov told Kommersant, AI will make it possible “to identify complex contextual links between text fragments and find hidden patterns and associations.”
Meanwhile, Igor Bederov, head of investigations at T.Hunter, suggested in an interview that “even in two years, the share of detected banned content requiring additional human moderation is unlikely to fall below 60%.”