During the "Using trustworthy AI to create a better world" session, participants noted that high performance claimed by a developer does not remove the need for transparency and for verification tools that let consumers check how an AI system operates. Speakers also stressed the need to develop national regulation for integrated AI technologies, above all in medicine. The "standard" measures developers take at the technological level to control the quality and security of promising AI solutions are evidently not enough.
In the course of the discussion, speakers proposed a common framework for assessing AI technologies at various levels, one that makes it possible to delineate areas of responsibility between the developer and the state. Under this framework, the national regulator sets the parameters a future AI product must meet, while the developer takes all necessary measures at the engineering level to ensure the safety and explainability of the designed AI solution. This distribution of responsibilities is expected to foster the emergence of AI technologies in civil society that can be trusted to solve important problems without excessive oversight and re-checking by the AI system's operator.
Yury Lindre, an analyst at the Competence Center for Global IT Cooperation, believes that several years ago Russian experts were among the originators of this "Trustworthy AI" concept. "It is gratifying to see that the risks and threats our experts have constantly talked about are now being recognized at the international level. A global consensus is taking shape that the decision-making process of an AI system should be understandable to a person. Otherwise, such AI systems should not be widely used, despite their formally high performance," the specialist noted.