Russia May Introduce Labeling for AI-Generated Content
The Russian State Duma is currently discussing the possibility of introducing a law that would require labeling of content generated by neural networks. This information was shared by Anton Nemkin, a member of the State Duma Committee on Information Policy.
According to Nemkin, there is already so much information online created by bots, algorithms, and other tools that it is becoming increasingly difficult to distinguish between real and fabricated content. He added, “Looking at the long-term perspective, the development of AI services without proper oversight poses the risk of massive amounts of unchecked texts, as well as completely fabricated facts, figures, and data.”
Nemkin emphasized that it is important for people to understand what they are dealing with and to distinguish between real and artificial content. He compared this to advertising, where users can easily be misled or manipulated, and said it is the government’s responsibility to prevent this from happening.
“In my opinion, labeling should be done using graphic or watermark signs. The main thing is that such labeling should be unobtrusive, yet clear and noticeable to any user, so they understand what kind of content they are viewing and can analyze it more carefully,” Nemkin stressed.
Since it is not yet clear what kind of expert analysis could determine the degree of machine versus human involvement in a given text, Nemkin argued that Russian AI services should automatically label any text they generate. He also suggested that the Russian companies developing generative neural network technologies, primarily Sber and Yandex, should be the first to work on such digital labeling technologies for images.
“However, considering that some AI-generated content is created for destructive purposes, it is unreasonable to expect its creators to label it themselves. Therefore, it would make sense to give new powers to agencies like Roskomnadzor, whose specialists could conduct examinations to identify such content if its creators try to avoid labeling. Once identified, this content should be forcibly labeled on the platforms where it is distributed, or blocked if it is being spread for illegal purposes,” Nemkin proposed.
The idea of labeling AI-generated content, similar to what Meta (recognized as an extremist organization and banned in Russia), TikTok, and YouTube do, was first raised in Russia in February of this year. Since then, the idea has been revisited several times, including by Anton Gorelkin, deputy chairman of the State Duma Committee on Information Policy.