Roskomnadzor has taken an interest in an AI system for video-based deception detection developed at ITMO University. A prototype was tested in the run-up to the United States Senate elections. The agency believes the system could be used for deepfake detection. However, market participants doubt the solution's viability, expecting its false-positive rate to be too high.
Specialists at the National Center for Cognitive Research (NCCR) of ITMO University are building Expert, an AI-based service for checking videos for lies and manipulation, on behalf of the General Radio Frequency Center (GRFC), an agency subordinate to Roskomnadzor.
The service makes it possible to examine videos for lies and is of interest for fast deepfake detection, noted Alexander Fedotov, head of the Scientific and Technical Center of FSUE GRFC: “If ITMO is able to present a reliably working technology as a finished product, GRFC will look into the possibility of using it.”
According to ITMO representatives, the Expert system analyzes video and audio, assessing the speaker’s level of confidence, internal and external aggression, congruence (i.e., agreement between the information conveyed verbally and that conveyed non-verbally), and inconsistency, and compares the wording with that of scientific papers and other specialists’ statements.
Today over 90 % of Russian AI-related products are built on foreign open-source libraries, downloaded together with datasets for machine learning, explains a top manager at a specialized IT company. “This is why ITMO’s product is tailored for English. Adapting it to Russian would require a large amount of data and considerable expense.” Similar products have already been developed in the West: in 2017, the University of Maryland and Dartmouth College in the U.S. created a neural network for deception detection, training it on videos from courtroom trials. The Innocence Project, a U.S. nonprofit, also worked on AI deception detection in court and used Amazon Mechanical Turk for analysis.
VisionLabs has expertise in developing deepfake detectors for facial images; according to the company, they are quite effective, achieving an accuracy of 94–100 %. However, the performance of neural networks is today assessed in probabilistic terms, says a Kommersant source in the IT market: “When such a product reaches the pilot operation stage, the probability of false activation will be rather high, and it will take years before the project becomes useful.” Kirill Lyakhmanov of the EBR law firm stressed that the idea of determining the “truth” or “falsity” of statements from indirect behavioral signs does not hold water: “Any kind of ‘lie detector’ test result is inadmissible as evidence in court in most countries.”
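The skepticism about false activations comes down to the base-rate effect: even a detector with a high headline accuracy flags many genuine videos when fakes are rare in the screened stream. A minimal sketch, with all numbers chosen purely for illustration (none come from VisionLabs, ITMO, or GRFC):

```python
# Base-rate illustration: why high accuracy can still mean many false flags.
# Sensitivity, specificity, and the share of fakes below are assumptions.

def flagged_counts(total, fake_rate, sensitivity, specificity):
    """Return (true_positives, false_positives) for a batch of videos."""
    fakes = total * fake_rate
    reals = total - fakes
    tp = fakes * sensitivity          # fakes correctly flagged
    fp = reals * (1 - specificity)    # genuine videos wrongly flagged
    return tp, fp

# Assume 96 % sensitivity and specificity, and that 1 % of videos are fakes.
tp, fp = flagged_counts(100_000, 0.01, 0.96, 0.96)
precision = tp / (tp + fp)
print(f"true positives: {tp:.0f}, false positives: {fp:.0f}, "
      f"precision: {precision:.1%}")
# Roughly 4 out of 5 flagged videos are genuine under these assumptions.
```

Under these made-up numbers, 960 real fakes are flagged alongside 3,960 genuine videos, so only about one flagged video in five is actually a fake, which is the kind of false-activation burden the quoted source describes.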