About


Take On Payments, a blog sponsored by the Retail Payments Risk Forum of the Federal Reserve Bank of Atlanta, is intended to foster dialogue on emerging risks in retail payment systems and enhance collaborative efforts to improve risk detection and mitigation. We encourage your active participation in Take on Payments and look forward to collaborating with you.

Take On Payments

November 5, 2018


Organizational Muscle Memory and the Right of Boom

"Left of boom" is a military term that refers to crisis prevention and training. The idea is that resources are focused on preparing soldiers to prevent an explosion or crisis—the "boom!" The training they undergo in left of boom also helps the soldiers commit their response to a crisis, if it does happen, to muscle memory, so they will act quickly and efficiently in life-threatening situations.


The concept of the boom timeline has been applied to many other circumstances, as I can personally attest. More years ago than I will admit to, I was a teller and had to participate in quarterly bank-robbery training that focused on each employee's role during and immediately after a robbery. The goal was to help us commit these procedures to muscle memory so that when we were faced with a high-stress situation, our actions would be second nature. My training was tested one day when I came face-to-face with a motorcycle-helmet-wearing bank robber who leaped over the counter into the teller area. Like most bank robbers, he was in and out fast, but thanks to muscle memory, we sprang into action the moment he leaped back over the counter and ran out of the branch.

This type of muscle memory preparation has also been applied to cybersecurity. Organizations commit significant human and capital resources to the left of boom to help prevent and detect threats to their networks. Unfortunately, cybersecurity experts must get things right 100 percent of the time while bad actors have to be right only once. So how do organizations prepare for the right of boom?

Recently, I had the opportunity to observe a right-of-boom exercise that simulated a systemic cyberbreach of the payments system. This event, billed as the first of its kind, was sponsored by P20 and held in Cambridge, Massachusetts. Cybersecurity leaders from the payments industry convened for a war games exercise ripped from the headlines. The scenario: a cyberbreach at a multinational financial services company on Thanksgiving Day, the day before the biggest shopping day of the year, that included the theft and online posting of 75 million customer records, along with a ransomware attack that shut down the company's computer systems. The exercise began with a phone call from a reporter asking for the company's response to the posting of customer records online—BOOM! Immediately, the discussion turned to an incident response plan. What actions would be taken first? Who do you call? How do you communicate with employees if your system has been overtaken by a ransomware attack? How do you serve your customers? When does the "in case of fire, break glass" moment arrive? In other words, has your organization defined what constitutes a crisis and agreed on when to initiate the crisis response plan?

An overarching theme was the importance of the "commander's intent," which reflects the priorities of the organization in the event of an incident. It empowers employees to exercise "disciplined initiative" and "accept prudent risk"—both principles associated with the military philosophy of "mission command"—so the company can return to its primary business as quickly as possible. In the context of a cyberbreach that has shut down communication channels within an organization, employees, in the absence of management guidance, can analyze the situation, make decisions, and then take action. The commander's intent forms the basis of an organization's comprehensive incident response plan and helps to create a shared understanding of organizational goals by identifying the key things your organization must execute to maintain operations.

Here is an example of a commander's intent statement:

Process all deposits and electronic transactions to ensure funds availability for all customers within established regulatory timeframes.

Having a plan in place where everyone from the top of the organization down understands their role and then practicing that plan until it becomes rote, much like my bank robbery experience, is critical today.

By Nancy Donahue, project manager in the Retail Payments Risk Forum at the Atlanta Fed

November 5, 2018 in consumer protection, cybercrime, cybersecurity

September 10, 2018


The Case of the Disappearing ATM

The longtime distribution goal of a major soft drink company is to have its product "within an arm's reach of desire." This goal might also be applied to ATMs—the United States has one of the highest concentrations of ATMs per adult. In a recent post, I highlighted some of the findings from an ATM locational study conducted by a team of economics professors from the University of North Florida. Among their findings, for example, was that of the approximately 470,000 ATMs and cash dispensers in the United States, about 59 percent have been placed and are operated by independent entrepreneurs. Further, these independently owned ATMs "tend to be located in areas with less population, lower population density, lower median and average income (household and disposable), lower labor force participation rate, less college-educated population, higher unemployment rate, and lower home values."

This finding directly relates to financial inclusion, an issue of concern to the Federal Reserve. A 2016 study by Accenture pointed "to the ATM as one of the most important channels, which can be leveraged for the provision of basic financial services to the underserved." I think most would agree that the majority of the unbanked and underbanked population is likely to reside in the demographic areas described above. One could conclude that the independent ATM operators are fulfilling a demand of people in these areas for access to cash, their primary method of payment.

Unfortunately for these communities, a number of independent operators are having to shut down and remove their ATMs because their banking relationships are being terminated. These closures started in late 2014, but a larger wave of account closures has occurred over the last several months. In many cases, the operators are given no reason for the sudden termination. Some operators believe their settlement bank views them as a high-risk business for money laundering, since the primary product of the ATM is cash. Financial institutions may incorrectly group these operators with money services businesses (MSBs), even though state regulators do not consider them to be MSBs. Earlier this year, the U.S. House Financial Services Subcommittee on Financial Institutions and Consumer Credit held a hearing over concerns that this de-risking could be blocking consumers' (and small businesses') access to financial products and services. You can watch the hearing on video (the hearing itself begins at 16:40).

While a financial institution should certainly monitor its customer accounts to ensure adherence to its risk tolerance and compliance policies, we have to ask whether the independent ATM operators are being painted with too broad a risk brush. The reality is that it is extremely difficult for an ATM operator to funnel "dirty money" through an ATM. First, to gain access to the various ATM networks, the operator has to be sponsored by a financial institution (FI). In the sponsorship process, the FI rigorously reviews the operator's financial stability and other business operations, as well as its compliance with Bank Secrecy Act and anti-money-laundering (BSA/AML) requirements, because the FI sponsor is ultimately responsible for any network violations. Second, the networks handling the transactions are completely independent of the ATM owners. They produce financial reports that show the amount of funds an ATM dispenses in any given period, and they generate the settlement transactions. These networks maintain controls that clearly document the funds flowing through the ATM, and a review of the settlement account activity would quickly identify any suspicious activity.
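
As a rough illustration of that last point, the settlement-monitoring control amounts to a simple reconciliation: compare the dispense totals the network reports for each ATM against the settlement activity funding it, and flag any gap for review. All terminal IDs and dollar amounts below are invented for illustration.

```python
# Hypothetical sketch of the settlement-monitoring control described
# above: compare the amounts the network reports each ATM dispensed
# with the settlement entries funding it, and flag discrepancies.
# Terminal IDs and amounts are invented.

network_dispense_totals = {   # per-terminal totals reported by the network
    "ATM-001": 42_300.00,
    "ATM-002": 18_750.00,
}

settlement_entries = [        # credits posted to the operator's account
    ("ATM-001", 42_300.00),
    ("ATM-002", 17_250.00),   # short by $1,500 -> suspicious
]

def reconcile(reported, settled_entries, tolerance=0.01):
    """Return (terminal, gap) pairs where reported dispenses
    and settlement credits disagree beyond the tolerance."""
    settled = {}
    for terminal, amount in settled_entries:
        settled[terminal] = settled.get(terminal, 0.0) + amount
    exceptions = []
    for terminal, expected in reported.items():
        gap = expected - settled.get(terminal, 0.0)
        if abs(gap) > tolerance:
            exceptions.append((terminal, gap))
    return exceptions

print(reconcile(network_dispense_totals, settlement_entries))
# [('ATM-002', 1500.0)]
```

A real operator review would work from the network's settlement reports rather than hand-built dictionaries, but the control is the same: every dollar dispensed must be matched by a documented settlement entry.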

The industry groups representing the independent ATM operators appear to have gained a sympathetic ear from legislators and, to some degree, regulators. But the sympathy hasn't extended to those financial institutions that are accelerating account closures in some areas. We will continue to monitor this issue and report any major developments. Please let us know your thoughts.

By David Lott, a payments risk expert in the Retail Payments Risk Forum at the Atlanta Fed

September 10, 2018 in banks and banking, consumer protection, financial services, money laundering, regulations, regulators, third-party service provider

September 4, 2018


The First Step in Risk Management

One of the main objectives of information security is having a solid risk management strategy, which involves several areas: policy, compliance, third-party risk management, continuous improvement, and security automation and assessment, to name a few. This diagram illustrates at a high level the full cycle of a risk management strategy: adopting and implementing a framework or standards, which leads to conducting effective risk assessments, which then leads to maintaining continuous improvement.

There are more than 250 different security frameworks globally. Examples include the National Institute of Standards and Technology's (NIST) Framework for Improving Critical Infrastructure Cybersecurity, the Capability Maturity Model Integration (CMMI)®, and the Center for Internet Security's Critical Security Controls. (In addition, many industries have industry-specific standards and laws, such as health care's Health Insurance Portability and Accountability Act, or HIPAA.) Each framework is essentially a set of best practices that enables organizations to improve performance, important capabilities, and critical business processes surrounding information technology security.

The bad news is that, on average, 4 percent of the people targeted in any given phishing campaign open an attachment or click a link—and it takes only one person to put a company, or even an industry, at risk. Does your overall strategy address that 4 percent and plan for their clicks? The same research found that the more phishing emails someone has clicked in the past, the more likely they are to click in the future.
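
To see why that 4 percent matters, a little arithmetic helps. If each recipient clicks independently with probability 0.04 (a simplifying assumption for illustration), the chance that at least one person clicks grows rapidly with the number of recipients:

```python
# Back-of-the-envelope illustration of "it takes only one person":
# if each recipient of a phishing campaign clicks with probability 4%,
# the chance that at least one of n recipients clicks grows quickly.
click_rate = 0.04

def p_at_least_one_click(n, p=click_rate):
    """Probability that at least one of n recipients clicks,
    assuming independent behavior (a simplifying assumption)."""
    return 1 - (1 - p) ** n

for n in (10, 50, 100):
    print(f"{n:>4} recipients: {p_at_least_one_click(n):.1%}")
```

For a campaign reaching 100 employees, the chance of at least one click exceeds 98 percent under these assumptions, which is why a strategy that counts on zero clicks is no strategy at all.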

So, outside of complying with legal and regulatory requirements, how do you determine which framework or frameworks to adopt?

It depends! A Tenable Network Security report, Trends in Security Framework Adoption, provides insight into commonly adopted frameworks as well as the reasons companies have adopted them and how fully. Typically, organizations first consider security frameworks that have a strong reputation in their industries or for specific activities. They then look at compliance with regulations or mandates made by business relationships.

This chart shows reasons organizations have adopted the popular NIST Cybersecurity Framework.


The study found that there is no single security framework that the majority of companies use. Only 40 percent of respondents reported using a single security framework; many reported plans to adopt additional frameworks in the short term. Close to half of organizations (44 percent) reported they are using multiple frameworks in their security program; 15 percent of these are using three or more.

This year, the Federal Reserve System's Secure Payments Task Force released Payment Lifecycles and Security Profiles, an informative resource that provides an overview of payments. Each payment type is accompanied by a list of applicable legal, regulatory, and industry-specific standards or frameworks. Spoiler alert: the lists are long and complex!

Let me point out a subsection appearing with each payment type that is of particular interest to this blog: "Challenges and Improvement Opportunities." Scroll through these subsections to see specific examples calling for more work on standards or frameworks.

Organizations need choices. But having too many frameworks to choose from, coupled with their constantly changing nature and the fluid payments environment, can complicate the implementation of a risk management strategy. With so many choices and so much in flux, how are you managing step one of your risk management strategy?

By Jessica Washington, AAP, payments risk expert in the Retail Payments Risk Forum at the Atlanta Fed

September 4, 2018 in consumer protection, cybercrime, cybersecurity, payments risk, risk management

August 13, 2018


Protecting Our Senior Citizens from Financial Abuse

By all accounts, elder financial abuse appears to be a multi-billion-dollar problem. A 2011 New York State study found that, for every documented case of elder financial exploitation, more than 43 other cases went unreported. A 2015 report from True Link Financial estimates that nearly $17 billion is lost to financial exploitation, defined as the use of misleading or confusing language, often in conjunction with social pressure and tactics, to obtain a senior’s consent to take his or her money. According to the same report, another $6.7 billion is lost to caregiver abuse, which is deceit or theft by someone who has a trusting relationship with the victim, such as a family member, paid caregiver, attorney, or financial manager.

Over the last several months, Risk Forum members have had several conversations with boards and members of different regional payment associations. The topic of elder financial abuse and exploitation came up often. It has been over seven years since Take On Payments last explored the topic, so we are overdue for a post on the subject given both the interest from some of our constituents and new legislation around elder financial abuse recently signed into law.

With the aging baby boomer generation representing the fastest-growing segment of the population, awareness of the magnitude of elder financial abuse and an understanding of ways to identify and prevent it are critical to the well-being of our senior citizens. And that is exactly the intent of the Senior SAFE Act, which was passed by Congress and signed into law on May 24 as Section 303 of the Economic Growth, Regulatory Relief, and Consumer Protection Act. Briefly, the act extends immunity from liability to certain individuals employed at financial institutions (and other covered entities) who, in good faith and with reasonable care, disclose the suspected exploitation of a senior citizen to a regulatory or law enforcement agency. The employing financial institutions are also immune from liability with respect to disclosures these employees make. Before this immunity existed, banks and other financial-related institutions had privacy-violation concerns about disclosing financial information to other authorities. The new immunities are contingent on the financial institution developing and conducting employee training related to suspected financial exploitation of senior citizens. The act also includes guidance regarding the content, timing, and recordkeeping requirements of the training.

Massive underreporting of elder financial abuse and exploitation makes it difficult to estimate the amount of money lost. While the law does not require financial institutions to report suspected financial abuse and exploitation, it definitely encourages them to create employee educational programs by offering immunity. And those who know the Risk Forum well know that we are strong advocates of education. Elder financial abuse is a growing problem that must be tackled. How is this law changing your approach to reporting suspected cases of elder financial abuse and related employee education?

By Douglas A. King, payments risk expert in the Retail Payments Risk Forum at the Atlanta Fed

August 13, 2018 in consumer fraud, consumer protection

August 6, 2018


The FBI Is on the Case

I recently took advantage of a job shadow program in our Information Security Department (ISD). I joked with our chief information security officer that I was ready to "ride along" with his detectives for our own version of the television drama CSI: Crime Scene Investigation.

All jokes aside, I enjoyed working with ISD as part of the team rather than as an auditor, a role I have played in the past. We spent a good part of the day walking through layered security programs, vulnerability management, and data loss prevention. Underneath these efforts is an important principle for threat management: you can't defend against what you don't know.

Threat investigations absolutely must uncover, enumerate, and prioritize threats in a timely manner. Digging into each vulnerability hinges on information sharing through adaptable reporting mechanisms that allow ISD to react quickly. ISD also greatly depends on knowledge of high-level threat trends and what could be at stake.

It turns out that many payments professionals and law enforcement agencies also spend a large part of their time investigating threats in the payments system. After my job shadowing, I realized even more how important it is for our payments detectives to have access to efficient, modern information-sharing and threat-reporting tools to understand specific threat trends and loss potential.

One such tool is the Internet Crime Complaint Center (IC3). The FBI, which is the lead federal agency for investigating cyberattacks, established the center in May 2000 to receive complaints of internet crime. The mission of the IC3 is twofold: to provide the public with a reliable and convenient reporting mechanism that captures suspected internet-facilitated criminal activity and to develop effective alliances with industry partners. The agency analyzes and disseminates the information, which contributes to law enforcement work and helps keep the public informed.

The annual IC3 report aggregates and highlights data provided by the general public. The IC3 staff analyze the data to identify trends in internet-facilitated crimes and what those trends may represent. This past year, the most prevalent crime types reported by victims were:

  • Nonpayment/Nondelivery
  • Personal data breach
  • Phishing

The top three crime types with the highest reported losses were:

  • Business email compromise
  • Confidence/Romance fraud
  • Nonpayment/Nondelivery

The report includes threat definitions, how these threats relate to payments businesses, what states are at the highest risk for breaches, and what dollar amounts correspond to each crime type. This is one tool available to uncover, enumerate, and prioritize threats to the payment ecosystem. Do you have other system layers in place to help you start your investigations? If you don't know, it might be time for you to take a "ride along" with your detectives.

By Jessica Washington, AAP, payments risk expert in the Retail Payments Risk Forum at the Atlanta Fed

August 6, 2018 in consumer fraud, consumer protection, cybercrime, cybersecurity, data security, fraud, identity theft, risk management

July 23, 2018


Learning about Card-Not-Present Fraud Mitigation

Over the last year, I have had the pleasure of working with Fed colleagues and other payments industry experts on one of the Accredited Standards Committee's X9A Financial Industry Standards workgroups in writing a technical report on U.S. card-not-present (CNP) fraud mitigation. You can download the final report (at no cost) from the ANSI (American National Standards Institute) web store.

As this blog and other industry publications have been forecasting for years, the migration to payment cards containing EMV chips may already be resulting in a reduction of counterfeit card fraud and an increase in CNP fraud and other fraudulent activity. This has been the trend in other countries that have gone through the chip card migration, and there was no reason to believe that it would be any different in the United States. The purpose of the technical report was to identify the major types of CNP fraud and present guidelines for mitigating these fraud attacks to the various payments industry stakeholders.

Source: Data from Card-Not-Present (CNP) Fraud Mitigation in the United States, the 2018 technical report prepared by the Accredited Standards Committee X9, Incorporated Financial Industry Standards

After an initial section identifying the primary stakeholders that CNP fraud affects, the technical report reviews five major CNP transaction scenarios, complete with transaction flow diagrams. The report continues with a detailed section of terms, definitions, and initialisms and acronyms.

The best defense against CNP fraud from an industry standpoint is the protection of data from being breached in the first place. Section 5 of the report reviews the role that data security takes in CNP fraud mitigation. It contains references to other documents providing detailed data protection recommendations.

Criminals will gather personal and payment data in various attacks against those who don't use strong data protection practices, so the next sections deal with the heart of CNP fraud mitigation.

  • Section 6 identifies the major types of CNP fraud attacks, both attacks that steal data and those that use that data to conduct fraudulent activities.
  • Section 7 reviews mitigation tools and approaches to take against such attacks. The section is subdivided into perspectives of various stakeholders, including merchants, merchant acquirers and gateways, issuers and issuer processors, and, finally, payment card networks.
  • Section 8 discusses how a stakeholder should identify key fraud performance metrics and then analyze, report, and track those metrics. While stakeholders will track different metrics, each should measure at a level of detail sufficient to provide key insights and predictive indicators.
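
As a hypothetical illustration of the kind of metric tracking Section 8 calls for (the figures here are invented, and the basis-point convention is a common industry practice rather than something taken from the report), a fraud rate is often expressed as losses per basis point of sales volume:

```python
# Hypothetical sketch of one fraud performance metric: fraud losses
# expressed in basis points (1 bp = 0.01%) of sales volume, tracked
# per channel. All figures are illustrative.

def fraud_basis_points(fraud_losses, sales_volume):
    """Fraud losses as basis points of gross sales volume."""
    if sales_volume <= 0:
        raise ValueError("sales volume must be positive")
    return fraud_losses / sales_volume * 10_000

# Illustrative monthly figures for a card-not-present channel
losses = 125_000.00        # confirmed fraud losses, dollars
volume = 98_500_000.00     # gross CNP sales volume, dollars

print(f"CNP fraud rate: {fraud_basis_points(losses, volume):.2f} bps")
```

Tracking a ratio like this per channel and per month, rather than raw loss dollars, is what makes trends comparable across stakeholders of different sizes.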

The report concludes with several annex sections (appendices) covering a variety of subjects related to CNP fraud. Suggestions for the improvement or revision of the technical report are welcome. Please send them to the X9 Committee Secretariat, Accredited Standards Committee X9 Inc., Financial Industry Standards, 275 West Street, Suite 107, Annapolis, MD 21401. I hope you will distribute this document among those in your institution involved with CNP fraud prevention, detection, and response to use as an educational or reference document. I think it will be quite useful.

By David Lott, a payments risk expert in the Retail Payments Risk Forum at the Atlanta Fed

July 23, 2018 in card networks, cards, consumer fraud, consumer protection, cybercrime, cybersecurity, debit cards, identity theft

June 4, 2018


The GDPR's Impact on U.S. Consumers

If your email inbox is like mine, it's recently been flooded with messages from companies you’ve done online business with about changes in their terms and conditions, particularly regarding privacy. What has prompted this wave of notices is the May 25 implementation of Europe's General Data Protection Regulation (GDPR). Approved by the European Parliament in April 2016 after considerable debate, the regulation standardizes data privacy regulations across Europe for the protection of EU citizens.

The regulation applies to both data "controllers" and data "processors." A data controller is the organization that owns the data, while a data processor is an outside company that helps manage or process that data. The GDPR requirements focus on controllers and processors directly conducting business in the 28 countries that make up the European Union (EU). But the GDPR has the potential to affect businesses based in any country, including the United States, that collect or process the personal data of any EU citizen. Penalties for noncompliance can be quite severe. For that reason, many companies are choosing to err on the side of caution and are sending notices of changes to their privacy terms and conditions to all their customers. Some companies have even gone so far as to extend the privacy protections contained in the GDPR to all their customers, EU citizens or not.

The GDPR has a number of major consumer protections:

  • Individuals can request that controllers erase all information collected on them that is not required for transaction processing. They can also ask the controller to stop companies from distributing that data any further and, with some exceptions, have third parties stop processing the data. (This provision is known as "data erasure" or the "right to be forgotten.")
  • Companies must design information technology systems to include privacy protection features. In addition, they must have a robust notification system in place for when breaches occur. After a breach, the data processor must notify the data controller "without undue delay." When the breach threatens "risk for the rights and freedoms of individuals," the data controller must notify the supervisory authority within 72 hours of discovery of the breach. Data controllers must also notify "without undue delay" the individuals whose information has been affected.
  • Individuals can request to be informed if companies are obtaining their personal data and, if so, how they will use that data. Individuals also have the right to obtain, without charge, electronic copies of collected data, and they may send that data to another company if they choose.
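
The 72-hour supervisory-notification window in the second bullet is concrete enough to sketch. Here is a minimal illustration (the timestamps are invented) of computing the notification deadline from the time a breach is discovered:

```python
# Minimal sketch: the GDPR gives a data controller 72 hours from
# discovery of a qualifying breach to notify the supervisory
# authority. Timestamps below are illustrative.
from datetime import datetime, timedelta, timezone

GDPR_NOTIFICATION_WINDOW = timedelta(hours=72)

def notification_deadline(discovered_at: datetime) -> datetime:
    """Latest time the supervisory authority may be notified."""
    return discovered_at + GDPR_NOTIFICATION_WINDOW

discovered = datetime(2018, 5, 25, 9, 30, tzinfo=timezone.utc)
print(notification_deadline(discovered).isoformat())
# 2018-05-28T09:30:00+00:00
```

The clock starts at discovery, not at the time of the breach itself, which is one reason incident response plans put so much weight on fast detection and escalation.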

In addition, the GDPR requires large processing companies, as well as public authorities and other specified businesses, to designate a data protection officer to oversee the companies' compliance with the GDPR.

There have been numerous efforts in the United States to pass uniform privacy legislation, with little or no success. My colleague Doug King authored a post back in May 2015 about three cybersecurity bills under consideration that included privacy rights. Three years later, each bill has either had action suspended or remains in committee. It will be interesting to see, as the influence of the GDPR spreads globally, whether there will be additional efforts to pass similar legislation in the United States. What do you think?

And by the way, fraudsters are always looking for opportunities to install malware on your phones and other devices. We've heard reports of criminals sending "update notice" emails. The messages, which appear legitimate, try to get the unsuspecting recipient to click a link or open an attachment containing malware or a virus. So be careful!

By David Lott, a payments risk expert in the Retail Payments Risk Forum at the Atlanta Fed

June 4, 2018 in consumer protection, cybersecurity, data security, privacy, regulations

February 20, 2018


Best Practices for Data Privacy Policies

In my last couple of posts, I've discussed the issue of ethical policies related to data collection and analysis.  In the first one, I focused on why there is a need for such policies. The second post focused on ethical elements to include in policies directly involving the end user. Whether or not the customer is actively involved in accepting these policies, any company that collects data should have a strong privacy and protection policy. Unfortunately, based on the sheer number and magnitude of data breaches that have occurred, many companies clearly have not sufficiently implemented the protection element—resulting in the theft of personally identifiable information that can jeopardize an individual's financial well-being. In this post, the last of this series, I look at some best practices that appear in many data policies.

The average person cannot fathom the amount, scope, and velocity of personal data being collected. In fact, the power of big data has given rise to a new term: "newborn data," which describes new data created from analyses of multiple databases. While such aggregation can be beneficial in a number of cases—including for marketing, medical research, and fraud detection purposes—it recently came to light that enemy forces could use data collected from fitness wearables to determine the most likely paths and congregation points of military service personnel. As machine learning technology advances, newborn data will become more common, and it will be used in ways no one considered when the original data was collected.

All this data collecting, sharing, and analyzing has resulted in a plethora of position papers on data policies containing all kinds of best practices, but the elements I see in most policies include the following:

  • Data must not be collected in violation of any regulation or statute, or in a deceptive manner.
  • The benefits and harms of data collection must be thoroughly evaluated, then how collected data will be used and by whom must be clearly defined.
  • When the information comes from direct user interaction, consent should be obtained from the user, and the user should be given full disclosure.
  • The quality of the data must be constantly and consistently evaluated.
  • A neutral party should periodically conduct a review to ensure adherence to the policy.
  • Protection of the data, especially data that is individualized, is paramount; there should be stringent protection controls in place to guard against both internal and external risks. An action plan should be developed in case there is a breach.
  • The position of data czar—one who has oversight of and accountability for an organization's data collection and usage—should be considered.
  • In the event of a compromise, the data breach action plan must be immediately implemented.

By David Lott, a payments risk expert in the Retail Payments Risk Forum at the Atlanta Fed

February 20, 2018 in consumer protection, cybercrime, data security, identity theft, privacy | Permalink


February 5, 2018


Elements of an Ethical Data Policy

In my last post, I introduced the issue of ethical considerations around data collection and analytics. I asked if, along with the significant expansion of technical capabilities over the last five years, there had been a corresponding increase in awareness and advancement of the ethical issues of big data, such as privacy, confidentiality, transparency, protection, and ownership. In this post, I focus on the disclosures provided to users and suggest some elements concerning ethics to include in them.

The complexities of ethics policies

As I've researched the topic of big data, I've come to realize that developing a universal ethics policy will be difficult because of the diversity of data that's collected and the many uses for this data—in the areas of finance, marketing, medicine, law enforcement, social science, and politics, to name just a few.

Privacy and data usage policies are often disclosed to users signing up for particular applications, products, or services. My experience has been that the details about the data being collected are hidden in the customer agreement. Normally, the agreement offers no "opt-out" of any specific elements, so users must either decline the service altogether or begrudgingly accept the conditions wholesale.

But what about the databases that are part of public records? Often these public records are created without any direct communication with the affected individuals. Did you know that in most states, property records at the county level are available online to anyone? You can look up property ownership by name or address and find out the sales history of the property, including prices, square footage, number of bedrooms and baths, often a floor plan, and even the name of the mortgage company—all useful information for performing a pricing analysis of comparable properties, but also useful for a criminal to combine with other socially engineered information in an account takeover or new-account fraud attempt. Doesn't it seem reasonable that I should be notified, or at least be able to find out, when someone looks up my own property record?

Addressing issues in the disclosure

Often, particularly with financial instruments and medical information, those collecting data must comply with regulations that require specific disclosures and ways to handle the data. The following elements together can serve as a good benchmark in the development of an ethical data policy disclosure:

  • Type of data collected and usage. What type of data are being collected and how will that data be used? Will the data be retained at the individual level or aggregated, thereby preventing identification of individuals? Can the data be sold to third parties?
  • Accuracy. Can an individual review the data and submit corrections?
  • Protection. Are people notified how their data will be protected, at least in general terms, from unauthorized access? Are they told how they will be notified if there is a breach?
  • Public versus private system. Is it a private system that usually restricts access, or a public system that usually allows broad access?
  • Open versus closed. Is it a closed system, which prevents sharing, or is it open? If it's open, how will the information be shared, at what level, and with whom? An example of an open system is one that collects information for a governmental background check and potentially shares that information with other governmental or law enforcement agencies.
  • Optional versus mandatory. Can individuals decline participation in the data collection, or decline specific elements? Or is the individual required to participate such that refusal results in some sort of punitive action?
  • Fixed versus indefinite duration. Will the captured data be deleted or destroyed on a timetable or in response to an event—for example, two years after an account is closed? Or will it be retained indefinitely?
  • Data ownership. Do individuals own and control their own data? Biometric data stored on a mobile phone, for example, are not also stored on a central site, so they remain under the user's control. On the other hand, institutions may retain ownership. Few programs place ownership with the user, although agreements may grant legal rights governing how the data can be used.

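The individual-level versus aggregated distinction in the first bullet can be sketched in a few lines: publishing only group counts, and suppressing groups too small to hide in, reduces the chance a published row can be tied back to one person. (The records, ZIP codes, and threshold below are hypothetical, and real de-identification requires far more care than this illustrates.)

```python
from collections import Counter

# Individual-level records: each row identifies one person
records = [
    {"name": "Alice", "zip": "30309"}, {"name": "Bob", "zip": "30309"},
    {"name": "Carol", "zip": "30309"}, {"name": "Dan", "zip": "30002"},
]

MIN_GROUP = 3  # suppress any group smaller than this threshold

# Aggregate to counts per ZIP, then drop groups that are too small to
# conceal an individual (the lone resident of 30002 is suppressed)
counts = Counter(r["zip"] for r in records)
published = {z: n for z, n in counts.items() if n >= MIN_GROUP}
print(published)  # {'30309': 3}
```

The suppression threshold is the key design choice: without it, a "count" of one is effectively an individual-level disclosure.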
What elements have I missed? Do you have anything to suggest?

In my next post, I will discuss appropriate guiding principles for those circumstances in which individuals have no direct interaction with the collection effort.

By David Lott, a payments risk expert in the Retail Payments Risk Forum at the Atlanta Fed


February 5, 2018 in consumer protection, innovation, regulations | Permalink


January 29, 2018


Big Data, Big Dilemma

Five years ago, I authored a post discussing the advantages and pitfalls of "big data." Since then, data analytics has moved to the forefront of computer science, and data analysts are among the most sought-after professionals across many industries. One of my nephews, a month out of college (he graduated with honors with a dual degree in computer science and statistics), was hired by a rail transportation carrier to improve freight movement efficiency using data analytics—at a starting salary of more than $100,000.

Big data, machine learning, deep learning, artificial intelligence—these are terms we constantly see and hear in technology articles, webinars, and conferences. Some of this usage is marketing hype, but clearly the significant increases in computing power at lower costs have empowered a continued expansion in data analytical capability across a wide range of businesses including consumer products and marketing, financial services, and health care. But along with this expansion of technical capability, has there been a corresponding heightened awareness of the ethical issues of big data? Have we fully considered issues such as privacy, confidentiality, transparency, and ownership?

In 2014, the Executive Office of the President issued a report on big data privacy issues. The report was prefaced with a letter that included this caution:

Big data analytics have the potential to eclipse longstanding civil rights protections in how personal information is used in housing, credit, employment, health, education, and the marketplace. Americans' relationship with data should expand, not diminish, their opportunities and potential.

(The report was updated in May 2016.)

In the European Union (EU), the General Data Protection Regulation was adopted in 2016 and becomes enforceable in May 2018; it gives EU citizens significant control over their personal data, including the exportation of that data outside the EU. Although numerous cybersecurity bills, including some addressing data collection and protection, have been proposed in the U.S. Congress (see Doug King's 2015 post), none has passed to date despite the continuing announcements of data breaches. We have to go all the way back to the Privacy Act of 1974 for federal privacy legislation (other than constitutional protections), and that act dealt only with federal agencies' collection and use of data on individuals.

In a future blog post, I will give my perspective on what I believe to be the critical elements in developing a data collection and usage policy that addresses ethical issues in both overt and covert programs. In the interim, I would like to hear from you as to your perspective on this topic.

By David Lott, a payments risk expert in the Retail Payments Risk Forum at the Atlanta Fed

January 29, 2018 in consumer protection, innovation, regulations | Permalink

