Blueprint for an AI Bill of Rights
25.11.2022

The White House Office of Science and Technology Policy (OSTP) has published the Blueprint for an AI Bill of Rights.

The document's main significance lies in its formulation of basic principles to guide the design, use and deployment of automated systems that have the potential to meaningfully affect the American public's rights, opportunities or access to critical resources and services. While the document is not binding and does not constitute US government policy, many of its provisions reflect protections afforded by the US Constitution or implemented under existing US laws.

The search for compromise solutions and agreement on fundamental requirements for automated systems points to growing demand for their development and deployment from particular lobbying circles within the US government. The most interested parties are the leading developers supplying IT management solutions to major US businesses and multinational corporations (including Oracle, IBM, Intuit, ADP and others), as well as intelligence agencies and agencies responsible for information control and surveillance.

It should be noted that the use of such tools in public administration and social monitoring is not a new phenomenon either. These practices have become commonplace in mainland China, notably as part of the implementation of its social credit system.

The formulation of basic principles for the operation of automated systems also moves the regulation of "big data" to a new stage. In recent years, there has been a pressing need to establish common rules for the collection and use of user data across the public and private sectors. This is evidenced by the growing tendency of digital platforms to misuse user data, which increasingly leads to litigation and to financial and reputational costs for all parties to the conflict.

In doing so, lawmakers will need to strike a balance: the new AI Bill of Rights should not infringe on citizens' rights, while at the same time promoting a vibrant digital services marketplace and the deeper digitalization of American society as a whole.

Content of the Blueprint:  

Five principles are proposed as the basis for the application of automated systems in society: 

1. Participants should be protected from unsafe or ineffective systems.

2. Participants should not face discrimination by algorithms, and systems should be used and designed in an equitable way.

3. Participants should be protected from misuse of data through built-in safeguards, and should have discretion over how their data is used.

4. Participants should know that an automated system is being used and understand how and why it contributes to the outcomes that affect them.

5. Participants should be able to opt out of using the system where appropriate, and have access to a person who can quickly review and address the problems they encounter.

 

The description of each principle is accompanied by three key sections:

1) Why the principle is important:

This section summarizes the problems the principle seeks to address and the harms from which citizens are meant to be protected. In particular, it gives examples and specific cases in which the operation of automated systems has exposed vulnerabilities in the rights of their users and other affected parties.

2) What should be expected of automated systems:

Expectations for automated systems are intended to inform the development of additional technical standards and practices that should be tailored for particular sectors and contexts. 

This section also describes specific steps that can be implemented to realize the vision of this document. The expectations outlined reflect existing technology development practices, including pre-deployment testing, ongoing monitoring, and governance structures for automated systems. 

Finally, expectations about reporting are intended for the entity developing or using the automated system. The resulting reports may be made available to the public, regulators, auditors, industry standards groups or others engaged in independent verification, and should be made public as much as possible in accordance with law, regulation and policy. It is noted that considerations of intellectual property, law enforcement or national security may prevent public release. Where public reports are not possible, the information should be provided to oversight bodies and privacy, civil liberties or other ethics officers charged with safeguarding individuals’ rights. These reporting expectations are important for transparency so that the American people can have confidence that their rights, opportunities and access, as well as their expectations about technologies, are respected.

3) How these principles can move into practice:

This section provides real-life examples of how these guiding principles can become reality through laws, policies and practices. It describes practical technical and sociotechnical approaches to protecting rights, opportunities and access. 

Points in the US document likely to be of interest to specialists include the following:

1.     The section "What should be expected of automated systems" contains a provision on privacy protection by design and by default. Although this approach has appeared previously in a number of national and international instruments, the wording in the US Blueprint may be of interest to specialists developing similar approaches and corresponding terminology:

“Automated systems should be designed and built with privacy protected by default. Privacy risks should be assessed throughout the development life cycle, including risks of re-identification, and appropriate technical and policy mitigation measures should be implemented. This includes potential harm to those who are not users of the automated system but who may be affected by the system's inference of data, a targeted privacy violation, surveillance of the public, or other harm to the community. Data collection should be minimized and clearly communicated to the people whose data are collected. Data should only be collected or used for training or testing machine learning models if such collection and use is lawful and consistent with the expectations of the people whose data are collected. User experience research should be conducted to confirm that people understand what data is being collected about them and how it will be used, and that this collection matches their expectations and desires.”

2.     The text explains what constitutes sensitive data. 

“Sensitive data: Data and metadata are sensitive if they pertain to a person in a sensitive domain (defined below); are generated by technologies used in a sensitive domain; can be used to infer data from a sensitive domain or sensitive data about a person (such as disability data, genomic data, biometric data, behavioural data, geolocation data, data related to criminal justice interactions, relationship history and legal status, such as custody and divorce information, and environmental data from home, work or school); or have a reasonable potential to be used in ways that could cause significant harm to individuals, such as loss of privacy or financial loss due to identity theft. Data and metadata obtained from persons under the age of majority are also sensitive, even if they do not relate to a sensitive domain. Such data include, in particular, numerical, textual, graphical, audio or video data.

“Sensitive domains are areas where ongoing activities could cause significant harm, including significant negative impacts on human rights such as autonomy and dignity, as well as civil liberties and civil rights. Areas that have historically been highlighted as deserving of enhanced data protection, or where the public has a reasonable expectation of such enhanced protection, include health care, family planning and care, employment, education, criminal justice, and personal finance, among others. In the context of this framework, such domains are considered sensitive regardless of whether the specifics of the system's context would require coverage under existing law, and the domains and data that are considered sensitive may change over time, depending on societal norms and context.”

3.     The section “How these principles can move into practice” identifies specific bodies and possible projects through which these principles can be implemented in the development and use of technology, which makes the document more concrete and its intent more targeted.


