About


Take On Payments, a blog sponsored by the Retail Payments Risk Forum of the Federal Reserve Bank of Atlanta, is intended to foster dialogue on emerging risks in retail payment systems and enhance collaborative efforts to improve risk detection and mitigation. We encourage your active participation in Take On Payments and look forward to collaborating with you.

Take On Payments

April 15, 2019


For Customer Education, Map Out the Long Journey

Financially savvy consumers are good customers for financial services. They save for retirement and pay back loans. Those are among the findings of research looking into the effects of formal financial education. And, as readers of this blog already know, customer education is central to risk management.

Using data from the National Financial Capability Study, researchers at the University of Nebraska found that financial education encouraged positive behaviors in the long run, such as saving for retirement or setting up an emergency fund. For short-run behavior, which the researchers defined as tasks that "give continual feedback," the evidence was mixed. They hypothesized that, in the short run, people learn good behavior better from getting negative feedback like late fees.

A paper by researchers at the Federal Reserve Board looked at three states (Georgia, Idaho, and Texas) that began requiring financial education in 2007. Students in school after the requirement was implemented had higher relative credit scores and lower relative loan delinquency rates than young people in bordering states without financial education, and the effects lasted through age 22, four years after high school graduation. Among the goals of the Georgia curriculum is one that says students should be able to "apply rational decision making to personal spending and saving choices" and "evaluate the costs and benefits of using credit."

What this means: if I learn in middle school that cost should factor into college choice, perhaps I'll decide to take on less student loan debt when it's time to choose a college. If one of my college professors stresses the importance of saving for retirement, perhaps I'll be more likely to make sure I participate in my employer's 401(k) and qualify for its full match. If I receive regular reminders about phishing attacks, perhaps I'll be less likely to reply to or open a link in a phishy email.

April is Financial Literacy Month. For parents, teachers, and financial institutions, it's encouraging to know that split-second timing is not necessarily critical to effective financial learning. Financial education need not be delivered at life's crossroads, but everyone should have an overview of the route before getting on the road.

Finally, let me share some tips:

  • For parents of young children: Use these parent Q&A resources during story time. They are designed to help you talk with children about personal finance topics related to their daily lives, including the importance of making careful decisions about saving versus spending.
  • For teachers: The Federal Reserve Bank of Atlanta offers professional development programs for teachers, designed to enhance classroom instruction of economics and personal finance, including a free webinar on April 16, "Personal Finance Basics: Classroom Resources."

By Claire Greene, a payments risk expert in the Retail Payments Risk Forum at the Atlanta Fed

April 15, 2019 in consumer protection, risk management | Permalink | Comments (0)

April 8, 2019


Insuring Against Cyber Loss

Over the last few months, my colleagues and I have had multiple speaking engagements and discussions with banking and payments professionals on the topic of business email compromise (BEC). Generally, these discussions lead to talk about a risk management strategy or approach for this large and growing type of scam. One way some companies and financial institutions are mitigating their risk of financial loss to BEC and other cyber-related events is through a cyber-risk insurance policy. In a recent conversation, someone told me that their cyber-insurance carrier mandates an audit and assessment of their cybersecurity strategy and practices by an outside firm; otherwise, they risk losing coverage.

According to a recent Wall Street Journal article, some large insurers are going a step further and collaborating with each other to offer their own assessments of the cybersecurity products and services available to businesses. Their results, which they will make publicly available, will identify products and services they deem effective in reducing cybersecurity incidents, and insured companies that use those products or services could potentially qualify for improved policy terms and conditions.

Cybersecurity vendors who would like their products and services to be assessed must apply by early May. They are not required to pay any fees for the evaluation. In light of the rising number of cyber-related events and increasing financial losses, along with the growing number of legal cases between companies and their insurance providers, this move by the insurance companies makes sense as a way for them to potentially reduce their exposure to cyber incidents. But it will be very interesting to see just how many cybersecurity vendors apply for participation in the program and how effective the insurers are at assessing the vendors' products and services. Moreover, for businesses, just using cybersecurity solutions helps them meet only part of the challenge. How they implement and maintain these solutions is critical to an effective cybersecurity approach.

Also of note in the Wall Street Journal article is a graph depicting the percentage of a particular global insurance company's clients, by industry, that have purchased a stand-alone cyber-insurance policy. Financial institutions, at 27 percent, rank last. Perhaps they are more confident in their cybersecurity strategies than other industries are, or perhaps insurers offer no stand-alone policies attractive to financial institutions.

The cyber threat today is serious. In fact, when asked in a recent CBS 60 Minutes interview about a possible cyberattack on the U.S. banking system, Federal Reserve Board Chairman Jerome Powell responded that "cyber risk is a major focus—perhaps the major focus in terms of big risks."

As the Risk Forum continues to monitor cyber risks, we look forward to the public findings from the insurers' collaborative assessment of cybersecurity products and services and will be interested to see whether, over time, more financial institutions obtain cyber-risk insurance policies. I suspect the cyber-insurance industry will continue to evolve its product offerings and to grow as companies look to mitigate the financial impact of cyber events.

What are your thoughts on this collaborative effort by the insurers? How do you see the cyber-insurance industry evolving? And do you think more financial institutions (or perhaps your own) will acquire cyber-insurance policies?

By Douglas A. King, a payments risk expert in the Retail Payments Risk Forum at the Atlanta Fed

April 8, 2019 in banks and banking, cybercrime, cybersecurity | Permalink | Comments (0)

April 1, 2019


Contactless Cards: The Future King of Payments?

Just over two years ago, my colleague Doug King penned a post lamenting the lack of dual interface, or "contactless," chip payment cards in the United States. In addition to having the familiar embedded chip, a dual interface card contains a hidden antenna that allows the holder to tap the card on or wave it near the POS terminal. This is the same near field communication (NFC) technology that the various "pay" wallets inside mobile devices use.

Doug is now doing his daily fitness runs with a bigger smile on his face as the indicators appear more and more promising that 2019 will be the year of the contactless card. Large issuers have been announcing plans to distribute dual interface cards either in mass reissues or as a cardholder's current card expires. Earlier this year, some of the global brand networks launched advertising campaigns to make customers aware of the convenience that contactless cards offer.

So why have U.S. issuers not moved on this idea before now? I think there have been several reasons. First, for the last several years, financial institutions focused much of their resources on the EMV chip card migration. Second, contactless cards create an additional expense for issuers, and many of them wanted to let the market mature as it has done in a number of other countries. Issuers were also wary after the failure of the contactless card programs that some large financial institutions introduced in the early 2000s, when most merchants lacked terminals capable of handling the technology.

The EMV chip migration solved much of the merchant terminal acceptance problem as the vast majority of POS terminals upgraded to support EMV chips can also support contactless cards. (While a terminal may have the ability to support the technology, the merchant has to enable that support.) Visa claims that as of mid-2018, half of POS transactions in the United States were occurring at terminals that were contactless-enabled. Another factor favoring contactless transactions is the plan by major U.S. mass transit agencies to begin accepting contactless payment cards. According to the American Public Transportation Association's 2017 Ridership Report, there were 41 transit agencies in the United States with annual passenger trip volumes of over 20 million trips.

Given that consumer payments is largely a zero-sum environment, these developments have led me to ask myself and others what effect contactless cards will have on consumers' use of other payment forms, in particular mobile payments. As my colleagues and I have written numerous times in this blog, mobile payments continue to struggle to gain consumer adoption, despite earlier predictions that they would catch on quickly. Some believe that near-ubiquitous acceptance and fast transaction speed will favor the dual interface card. Others think that the increased merchant acceptance of contactless will help push the mobile phone into becoming the primary payment form.

My personal perspective is that contactless cards will hinder the growth of in-person mobile payments. There are those who claim to leave their wallet at home and never their phone, and they will continue to be strong users of mobile payments. But the reality is that mobile payments are not accepted at all merchant locations, whereas payment cards are practically ubiquitous. While I am a frequent user of mobile payments, simply waving or tapping a card appeals to me. It's much more convenient than having to open the pay application on my phone, sign on, and then authorize the transaction.

Do you believe the adoption of contactless cards by consumers and merchants will be as successful as it was for EMV chip cards? And do you think that contactless cards will help or hinder the growth of mobile payments? Let us hear from you.

By David Lott, a payments risk expert in the Retail Payments Risk Forum at the Atlanta Fed

April 1, 2019 in card networks, cards, emerging payments, EMV, innovation, mobile payments | Permalink | Comments (0)

March 25, 2019


Safeguarding Privacy and Ethics in AI

In a recent post I referred to the privacy and ethical guidelines that the nonprofit advocacy group EPIC (Electronic Privacy Information Center) is promoting. According to the group, these guidelines are based on existing regulatory and legal frameworks in the United States and Europe regarding data protection, human rights doctrine, and general ethical principles. Given the continued attention to advancements in machine learning and other computing technologies falling under the marketing term "artificial intelligence" (AI), I thought it would be beneficial to review these guidelines so readers can assess their validity and completeness. The headings and italicized text in these guidelines are EPIC's specific wording; additional text is my commentary. It is important to point out that neither the Federal Reserve System nor the Board of Governors has endorsed these guidelines.

  • Right to Transparency. All individuals have the right to know the basis of an AI decision that concerns them. This includes access to the factors, the logic, and techniques that produced the outcome. EPIC says the main elements of this principle can be found in the U.S. Privacy Act and a number of directives from the European Union. It is unlikely that the average person would be able to fully understand the complex computations generating a decision, but everyone still has the right to an explanation of and validation for the decision.
  • Right to Human Determination. All individuals have the right to a final determination made by a person. This ensures that a person, not a machine, is ultimately accountable for a final decision.
  • Identification Obligation. The institution responsible for an AI system must be made known to the public. There may be many different parties that contribute to an AI system, so it is important that anyone be able to determine which party has overall responsibility and accountability.
  • Fairness Obligation. Institutions must ensure that AI systems do not reflect unfair bias or make impermissible discriminatory decisions. I understand the intent of this principle—any program developed by a person will have some level of inherent bias—but how is it determined that the level of bias has reached an “unfair” level, and who makes such a determination?
  • Assessment and Accountability Obligation. An AI system should be deployed only after an adequate evaluation of its purpose and objectives, its benefits, as well as its risks. Institutions must be responsible for decisions made by an AI system. An AI system that presents significant risks, especially in the areas of public safety and cybersecurity, should be evaluated carefully before a deployment decision is made.
  • Accuracy, Reliability, and Validity Obligations. Institutions must ensure the accuracy, reliability, and validity of decisions. Compliance with this basic principle should be monitored both by the institution and by independent organizations.
  • Data Quality Obligation. Institutions must establish data provenance, and assure quality and relevance for the data input into algorithms. As an extension of the accuracy obligation above, detailed documentation and secure retention of the input data help other parties replicate the decision-making process to validate the final decision.
  • Public Safety Obligation. Institutions must assess the public safety risks that arise from the deployment of AI systems that direct or control physical devices, and implement safety controls. As more Internet-of-Things applications are deployed, this principle will increase in importance.
  • Cybersecurity Obligation. Institutions must secure AI systems against cybersecurity threats. AI systems, especially those that could have a significant impact on public safety, are potential targets for criminals and terrorist groups and must be made secure.
  • Prohibition on Secret Profiling. No institution shall establish or maintain a secret profiling system. This principle supports independent accountability by barring an institution from establishing or maintaining a separate, clandestine profiling system.
  • Prohibition on Unitary Scoring. No national government shall establish or maintain a general-purpose score on its citizens or residents. The concern this principle addresses is that such a score could be used to establish predetermined outcomes across a number of activities. For example, in the private sector, a credit rating score can be a factor not only in credit decisions but also in other types of decisions, such as for vehicle, life, and medical insurance underwriting.
  • Termination Obligation. An institution that has established an AI system has an affirmative obligation to terminate the system if human control of the system is no longer possible. I refer to this final principle as the "HAL principle" from 2001: A Space Odyssey, in which the crew tries to shut down HAL (a Heuristically programmed ALgorithmic computer) after it starts making faulty decisions. A crew member finally succeeds in shutting HAL down, but only after it has killed all the other crew members. HAL is an extreme example, but the principle ensures that an AI system's actions do not override or contradict the actions and decisions of the people responsible for the system.

On February 11, 2019, the president signed an executive order promoting the United States as a leader in the use of AI. In addition to addressing technical standards and workforce training, the order called for the protection of "civil liberties, privacy, and American values" in the application of AI systems. As the pace of AI development quickens, it seems important that an ethical framework be put in place. Do you think these are reasonable and realistic guidelines that should be adopted? Do you think some of them will hinder the pace of AI application development? Are any principles missing?

Let us know what you think.

By David Lott, a payments risk expert in the Retail Payments Risk Forum at the Atlanta Fed

March 25, 2019 in emerging payments, fintech, innovation | Permalink | Comments (0)
