Take On Payments, a blog sponsored by the Retail Payments Risk Forum of the Federal Reserve Bank of Atlanta, is intended to foster dialogue on emerging risks in retail payment systems and enhance collaborative efforts to improve risk detection and mitigation. We encourage your active participation in Take on Payments and look forward to collaborating with you.
March 25, 2019
Safeguarding Privacy and Ethics in AI
In a recent post I referred to the privacy and ethical guidelines that the nonprofit advocacy group EPIC (Electronic Privacy Information Center) is promoting. According to EPIC, the guidelines are grounded in existing regulatory and legal frameworks in the United States and Europe covering data protection, human rights doctrine, and general ethical principles. Given the continued attention to machine learning and other computing advancements falling under the marketing term “artificial intelligence” (AI), I thought it would be beneficial to review these guidelines so readers can assess their validity and completeness. The headings and the italicized text in these guidelines are EPIC’s specific wording; the additional text is my commentary. It is important to point out that neither the Federal Reserve System nor the Board of Governors has endorsed these guidelines.
- Right to Transparency. All individuals have the right to know the basis of an AI decision that concerns them. This includes access to the factors, the logic, and techniques that produced the outcome. EPIC says the main elements of this principle can be found in the U.S. Privacy Act and a number of directives from the European Union. It is unlikely that the average person would be able to fully understand the complex computations generating a decision, but everyone still has the right to an explanation of and validation for the decision.
- Right to Human Determination. All individuals have the right to a final determination made by a person. This ensures that a person, not a machine, is ultimately accountable for a final decision.
- Identification Obligation. The institution responsible for an AI system must be made known to the public. There may be many different parties that contribute to an AI system, so it is important that anyone be able to determine which party has overall responsibility and accountability.
- Fairness Obligation. Institutions must ensure that AI systems do not reflect unfair bias or make impermissible discriminatory decisions. I understand the intent of this principle—any program developed by a person will have some level of inherent bias—but how is it determined that the level of bias has reached an “unfair” level, and who makes such a determination?
- Assessment and Accountability Obligation. An AI system should be deployed only after an adequate evaluation of its purpose and objectives, its benefits, as well as its risks. Institutions must be responsible for decisions made by an AI system. An AI system that presents significant risks, especially in the areas of public safety and cybersecurity, should be evaluated carefully before a deployment decision is made.
- Accuracy, Reliability, and Validity Obligations. Institutions must ensure the accuracy, reliability, and validity of decisions. Compliance with this basic principle would be monitored by the institution as well as by independent organizations.
- Data Quality Obligation. Institutions must establish data provenance, and assure quality and relevance for the data input into algorithms. As an extension of the preceding accuracy obligation, detailed documentation and secure retention of the input data help other parties replicate the decision-making process to validate the final decision.
- Public Safety Obligation. Institutions must assess the public safety risks that arise from the deployment of AI systems that direct or control physical devices, and implement safety controls. As more Internet-of-Things applications are deployed, this principle will increase in importance.
- Cybersecurity Obligation. Institutions must secure AI systems against cybersecurity threats. AI systems, especially those that could have a significant impact on public safety, are potential targets for criminals and terrorist groups and must be made secure.
- Prohibition on Secret Profiling. No institution shall establish or maintain a secret profiling system. This principle preserves the possibility of independent accountability by barring an institution from operating a separate, clandestine profiling system alongside its disclosed ones.
- Prohibition on Unitary Scoring. No national government shall establish or maintain a general-purpose score on its citizens or residents. The concern this principle addresses is that such a score could be used to establish predetermined outcomes across a number of activities. For example, in the private sector, a credit rating score can be a factor not only in credit decisions but also in other types of decisions, such as for vehicle, life, and medical insurance underwriting.
- Termination Obligation. An institution that has established an AI system has an affirmative obligation to terminate the system if human control of the system is no longer possible. I refer to this final principle as the “HAL principle” from 2001: A Space Odyssey, in which the crew tries to shut down HAL (a Heuristically programmed ALgorithmic computer) after it starts making faulty decisions. A crew member finally succeeds in shutting HAL down, but only after it has killed all the other crew members. HAL is an extreme example, but the principle ensures that an AI system’s actions do not override or contradict the actions and decisions of the people responsible for the system.
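On the question raised under the Fairness Obligation—how one would determine that bias has reached an “unfair” level—one widely cited yardstick (not part of EPIC’s guidelines) is the “four-fifths rule” from U.S. equal-employment selection guidance: a favorable-outcome rate for one group below 80 percent of the rate for the most-favored group is flagged for review. A minimal sketch, with hypothetical group labels, decision data, and threshold chosen purely for illustration:

```python
# Illustrative disparate-impact check for an automated decision system.
# The 80% (four-fifths) threshold comes from U.S. employee-selection
# guidance and is used here only as an example yardstick; the groups
# and decisions below are hypothetical.

def approval_rate(decisions):
    """Fraction of decisions that were approvals (1 = approved, 0 = declined)."""
    return sum(decisions) / len(decisions)

def disparate_impact_ratio(group_a, group_b):
    """Ratio of the lower group's approval rate to the higher group's."""
    low, high = sorted([approval_rate(group_a), approval_rate(group_b)])
    return low / high

# Hypothetical model outcomes for two demographic groups
group_a = [1, 1, 1, 0, 1, 1, 0, 1, 1, 1]   # 80% approved
group_b = [1, 0, 1, 0, 0, 1, 0, 1, 0, 0]   # 40% approved

ratio = disparate_impact_ratio(group_a, group_b)
print(f"impact ratio: {ratio:.2f}")                          # 0.40 / 0.80 = 0.50
print("flag for review" if ratio < 0.8 else "within threshold")
```

Even a simple check like this illustrates the harder governance question in the principle: someone must still decide which groups to compare, which outcomes count, and what threshold constitutes “unfair.”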
On February 11, 2019, the president signed an executive order promoting the United States as a leader in the use of AI. In addition to addressing technical standards and workforce training, the order called for the protection of “civil liberties, privacy, and American values” in the application of AI systems. As the pace of AI development quickens, it seems important that an ethical framework be put in place. Do you think these are reasonable and realistic guidelines that should be adopted? Do you think some of them will hinder the pace of AI application development? Are any principles missing?
Let us know what you think.
By David Lott, a payments risk expert in the Retail Payments Risk Forum at the Atlanta Fed