AI to Conduct Psychological Analysis of Social Media Users Without Their Consent
The V.P. Ivannikov Institute for System Programming, which received a grant from the Analytical Center under the Government of Russia, has posted several tenders for artificial intelligence (AI) research on the government procurement website.
The first contract, highlighted by Kommersant, involves research into the potential use of AI for psychological profiling of individuals based on their social media activity. According to the tender documents, the client notes that the spread of technologies using digital footprints for personality analysis “opens up broad opportunities for assessing personality and predicting behavior without conducting psychological testing that requires voluntary consent.”
The new technology is intended to help combat suicide groups and counter the recruitment of citizens into terrorist and extremist activities. The results of the competition will be announced on November 29. The contractor must complete the work by September 1, 2024. The initial contract value is 36 million rubles.
The Analytical Center’s press service explained that the technology is intended for assessing the condition of individuals making key government or corporate decisions, as well as “identifying manipulative control of large social groups, evaluating employees’ creative potential, and modeling their professional development.”
However, legal experts believe that using this technology may conflict with current personal data protection laws. According to Oleg Blinov, a lecturer at Moscow Digital School:
“When we post photos on social media, we expect them to be of interest to our friends and acquaintances. However, it turns out that a certain government agency will collect your data and assign you ‘personality scores,’ conducting ‘psychological diagnostics’ for unclear purposes.”
Alexandra Orekhovich, Director of Legal Initiatives at the Internet Initiatives Development Fund, emphasized that even if a person has not restricted access to information published on social networks, it is still protected by personal data law. Therefore, separate consent must be obtained for processing, specifying the terms and limitations for third-party data handling.
Other AI-Related Tenders
As mentioned above, this is not the only AI-related tender posted by the Ivannikov Institute on the government procurement website.
One lot, worth 16 million rubles, concerns research on “Methods for Detecting and Countering Adversarial Attacks on Machine Learning Models.” The technical assignment highlights the importance of ensuring the trustworthiness of machine learning models, especially in critical state-level applications such as healthcare, finance, mass media, defense, and more.
The client points out a central issue: the sensitivity of machine learning model outputs to small changes in input data. Even a minor perturbation (such as adding noise invisible to the human eye to an image) can significantly alter the model’s prediction, even though the modified input appears almost identical to the original.
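The effect the tender describes can be demonstrated on a toy model. The sketch below uses a hypothetical linear classifier with illustrative, hand-picked weights (none of this comes from the tender itself): shifting every "pixel" by just 0.01, in the direction indicated by the sign of the model's weights, is imperceptible per pixel yet flips the prediction, because the per-pixel shifts accumulate across thousands of dimensions.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

d = 10_000                                # number of "pixels"
# Toy linear classifier with alternating weights (illustrative only).
w = np.where(np.arange(d) % 2 == 0, 0.01, -0.01)

# A clean image the model weakly assigns to class 1 (logit = 0.1).
x = 0.5 + 0.001 * np.sign(w)              # pixel values stay in [0, 1]
p_clean = sigmoid(w @ x)                  # ~0.52 -> predicted class 1

# Sign-gradient perturbation: move each pixel by eps = 0.01 against the
# weights. The logit drops by eps * sum(|w|) = 0.01 * 100 = 1.0, which
# flips the prediction, while no single pixel changes by more than 0.01.
eps = 0.01
x_adv = x - eps * np.sign(w)
p_adv = sigmoid(w @ x_adv)                # ~0.29 -> predicted class 0

print(p_clean > 0.5, p_adv > 0.5)         # True False
print(round(float(np.abs(x_adv - x).max()), 6))  # 0.01
```

This is the intuition behind gradient-sign attacks on real networks, where the perturbation direction is computed from the model's actual gradient rather than hand-picked weights.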
The main goals of this procurement are:
- Research, systematize, and classify current methods for organizing, detecting, and countering adversarial attacks on machine learning models, as well as develop criteria for evaluating the resilience of existing models to such attacks.
- Develop an experimental software prototype for detecting and countering adversarial attacks on machine learning models and for assessing the resilience of existing models to such attacks.
Another tender, also valued at 36 million rubles, is part of a project to assess the security of artificial intelligence technologies (AIT) and identify possible attack scenarios against them, using the example of the AI.Radiology service, which detects pathologies in chest X-ray images and was developed by Innopolis University.
The objectives of this research project are:
- Test the AI.Radiology service for compliance with trust criteria and create a trusted version of the service;
- Develop trust criteria for AIT systems in medicine and create methodologies for building trusted AIT systems in healthcare.
For the contract on “Applying Trusted and Explainable AI Methods to Omics Data Analysis,” the contractor will need to analyze the current state of explainable AI methods; examine the challenges of applying machine learning models to high-dimensional data with few examples; and select trust criteria for the system in cases of overfitting, correct predictions, incorrect feature selection, and adversarial attacks where the system’s confidence in its prediction is low.
The contractor must also analyze existing methods for detecting anomalies in input data that lead to low-confidence predictions, and conduct a range of studies on AI data processing aimed at minimizing the risk of erroneous decisions by AI systems. The initial contract value is 45 million rubles.
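One of the simplest approaches in the family of methods the tender asks the contractor to survey is to flag any input whose top-class probability falls below a threshold and route it for human review. The sketch below is a minimal, hypothetical illustration of that idea; the threshold value and the example logits are arbitrary and not taken from the tender.

```python
import numpy as np

def softmax(logits):
    # Numerically stable softmax over the last axis.
    z = logits - logits.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def flag_low_confidence(logits, threshold=0.7):
    """Flag inputs whose top-class probability is below `threshold`.

    A deliberately simple stand-in for the anomaly-detection methods
    described in the tender; the 0.7 threshold is arbitrary.
    """
    probs = softmax(logits)
    return probs.max(axis=-1) < threshold

# Three hypothetical model outputs: confident, borderline, near-uniform.
logits = np.array([
    [4.0, 0.0, 0.0],   # confident       -> not flagged
    [1.0, 0.5, 0.0],   # borderline      -> flagged
    [0.1, 0.0, 0.1],   # near-uniform    -> flagged for review
])
print(flag_low_confidence(logits))   # [False  True  True]
```

More sophisticated detectors score the input itself (for example, by its distance from the training distribution) rather than relying on the model's own confidence, which adversarial inputs can inflate.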
All contracts are scheduled for completion by September 2024.