About


Take On Payments, a blog sponsored by the Retail Payments Risk Forum of the Federal Reserve Bank of Atlanta, is intended to foster dialogue on emerging risks in retail payment systems and enhance collaborative efforts to improve risk detection and mitigation. We encourage your active participation in Take On Payments and look forward to collaborating with you.

Take On Payments

February 11, 2019


AI and Privacy: Achieving Coexistence

In a post early last year, I raised the issue of privacy rights in the use of big data. After attending the AI (artificial intelligence) Summit in New York City in December, I believe it is necessary to expand that call to the wider spectrum of technology under the banner of AI, including machine learning. There is no question that increased computing power, reduced costs, and improved developer skills have made machine learning programs more affordable and powerful. As discussed at the conference, the various facets of AI technology have reached far past financial services and fraud detection into numerous aspects of our lives, including product marketing, health care, and public safety.

In May 2018, the White House announced the creation of the Select Committee on Artificial Intelligence. The main mission of the committee is "to improve the coordination of Federal efforts related to AI to ensure continued U.S. leadership in this field." It will operate under the National Science and Technology Council and will include senior research and development officials from key government agencies. The White House's Office of Science and Technology Policy will oversee the committee.

Soon after, Congress established the National Security Commission on Artificial Intelligence in Title II, Section 238 of the 2019 John McCain National Defense Authorization Act. While the commission is independent, it operates within the executive branch. Composed of 15 members appointed by Congress and the Secretaries of Defense and Commerce—including representatives from Silicon Valley, academia, and NASA—the commission's aim is to "review advances in artificial intelligence, related machine learning developments, and associated technologies." It is also charged with looking at technologies that keep the United States competitive and considering the legal and ethical risks.

While the United States wants to retain its leadership position in AI, it cannot overlook AI's privacy and ethical implications. A national privacy advocacy group, EPIC (or the Electronic Privacy Information Center), has been lobbying hard to ensure that both the Select Committee on Artificial Intelligence and the National Security Commission on Artificial Intelligence obtain public input. EPIC has asked these groups to adopt the 12 Universal Guidelines for Artificial Intelligence released in October 2018 at the International Data Protection and Privacy Commissioners Conference in Brussels.

These guidelines, which I will discuss in more detail in a future post, are based on existing regulatory guidelines in the United States and Europe regarding data protection, human rights doctrine, and general ethical principles. They state that any AI system with the potential to affect an individual's rights should be accountable and transparent and that humans should retain control over such systems.

As the strict privacy and data protection elements of the European Union's General Data Protection Regulation take hold in Europe and spread to other parts of the world, I believe that privacy and ethical elements will gain a brighter spotlight and AI will be a major topic of discussion in 2019. What do you think?

By David Lott, a payments risk expert in the Retail Payments Risk Forum at the Atlanta Fed

February 11, 2019 in consumer protection, emerging payments, fintech, innovation, privacy, regulations | Permalink

January 7, 2019


A New You: Synthetic Identity Fraud

With the start of the new year, you may have resolved to make a change in your life. Maybe you've even gone so far as to pledge to become a "new you." But someone may have already claimed that "new you," stealing your credentials and using them to create a new identity. Identity theft is a growing problem, resulting in millions of dollars in damage around the world. And now there is a modern twist to this old and costly problem: synthetic identity fraud. Panelists at a forum convened by the Government Accountability Office (GAO) define this problem as a "crime in which perpetrators combine real and/or fictitious information, such as Social Security numbers and names, to create identities with which they may defraud financial institutions, government agencies, or individuals." (Read forum highlights on the GAO website.) According to the U.S. Federal Trade Commission, synthetic identity fraud is the "fastest growing and hardest to detect" form of identity theft.

This graphic from the GAO illustrates how this type of identity fraud differs from what we have traditionally defined as identity theft.

[GAO graphic: traditional identity theft versus synthetic identity fraud]

As the graphic shows, in traditional identity fraud, the criminal pretends to be another (real) person and uses his or her accounts. In synthetic identity fraud, the criminal establishes a new identity using a person's real details (such as a Social Security number) and combines this information with fictitious information to create a new credit record.

The challenge for the payments industry is determining whether an identity is planted or legitimate. For example, parents with excellent credit histories sometimes add their children to their existing credit accounts to give their children the benefit of their positive financial behavior. This action allows the children to kick-start their own credit records. Similarly, a criminal could plant a synthetic identity in an existing credit account and from there build a credit history for this identity. (In many cases, the criminal works for years on building a strong credit history for that false identity before "cashing out" and inflicting financial damages on a large scale.)

So what can consumers do to protect themselves? Here are some simple ways to make it harder for a thief to steal your personal information:

  • Shred documents containing personal information.
  • Do not provide your Social Security number to businesses unless you absolutely have to.
  • Use tools that monitor credit and identity usage.
  • Freeze your credit as well as the credit of any minor children.
  • Check your accounts regularly to ensure that all transactions are legitimate and report any suspicious activity immediately.

Staying informed about synthetic identity fraud tactics and taking these steps to protect yourself can help you get one step closer to (preventing) "a new you."

By Catherine Thaliath, project management expert in the Retail Payments Risk Forum at the Atlanta Fed

January 7, 2019 in authentication, consumer fraud, consumer protection, data security, fraud, identity theft | Permalink

November 5, 2018


Organizational Muscle Memory and the Right of Boom

"Left of boom" is a military term that refers to crisis prevention and training. The idea is that resources are focused on preparing soldiers to prevent an explosion or crisis—the "boom!" The training they undergo in left of boom also helps the soldiers commit their response to a crisis, if it does happen, to muscle memory, so they will act quickly and efficiently in life-threatening situations.


The concept of the boom timeline has been applied to many other circumstances, as I can personally attest. More years ago than I will admit to, I was a teller and had to participate in quarterly bank-robbery training that focused on each employee's role during and immediately after a robbery. The goal was to help us commit these procedures to muscle memory so that when we were faced with a high-stress situation, our actions would be second nature. My training was tested one day when I came face-to-face with a motorcycle-helmet-wearing bank robber who leaped over the counter into the teller area. Like most bank robbers, he was in and out fast, but thanks to muscle memory, we sprang into action as soon as he leaped back over the counter and ran out of the branch.

This type of muscle memory preparation has also been applied to cybersecurity. Organizations commit significant human and capital resources to the left of boom to help prevent and detect threats to their networks. Unfortunately, cybersecurity experts must get things right 100 percent of the time while bad actors have to be right only once. So how do organizations prepare for the right of boom?

Recently, I had the opportunity to observe a right-of-boom exercise that simulated a systemic cyberbreach of the payments system. This event, billed as the first of its kind, was sponsored by P20 and held in Cambridge, Massachusetts. Cybersecurity leaders from the payments industry convened to engage in a war games exercise that was ripped from the headlines. The scenario: a Thanksgiving Day cyberbreach, the day before the biggest shopping day of the year, of a multinational financial services company that included the theft and online posting of 75 million customer records, along with a ransomware attack that shut down the company's computer systems. The exercise began with a phone call from a reporter asking for the company's response to the posting of customer records online—BOOM! Immediately, the discussion turned to the incident response plan. What actions would be taken first? Who do you call? How do you communicate with employees if your system has been overtaken by a ransomware attack? How do you serve your customers? And when is the "in case of fire, break glass" moment; that is, has your organization defined what constitutes a crisis and agreed on when to initiate the crisis response plan?

An overarching theme was the importance of the "commander's intent," which reflects the priorities of the organization in the event of an incident. It empowers employees to exercise "disciplined initiative" and "accept prudent risk"—both principles associated with the military philosophy of "mission command"—so the company can return to its primary business as quickly as possible. In the context of a cyberbreach that has shut down communication channels within an organization, employees, in the absence of management guidance, can analyze the situation, make decisions, and then take action. The commander's intent forms the basis of an organization's comprehensive incident response plan and helps to create a shared understanding of organizational goals by identifying the key things your organization must execute to maintain operations.

Here is an example of a commander's intent statement:

Process all deposits and electronic transactions to ensure funds availability for all customers within established regulatory timeframes.

Having a plan in place where everyone from the top of the organization down understands their role and then practicing that plan until it becomes rote, much like my bank robbery experience, is critical today.

By Nancy Donahue, project manager in the Retail Payments Risk Forum at the Atlanta Fed

 

November 5, 2018 in consumer protection, cybercrime, cybersecurity | Permalink

September 10, 2018


The Case of the Disappearing ATM

The longtime distribution goal of a major soft drink company is to have its product "within an arm's reach of desire." This goal might also be applied to ATMs—the United States has one of the highest concentrations of ATMs per adult. In a recent post, I highlighted some of the findings from an ATM locational study conducted by a team of economics professors from the University of North Florida. Among their findings, for example, was that of the approximately 470,000 ATMs and cash dispensers in the United States, about 59 percent have been placed and are operated by independent entrepreneurs. Further, these independently owned ATMs "tend to be located in areas with less population, lower population density, lower median and average income (household and disposable), lower labor force participation rate, less college-educated population, higher unemployment rate, and lower home values."

This finding directly relates to the issue of financial inclusion, an issue of concern to the Federal Reserve. A 2016 study by Accenture pointed "to the ATM as one of the most important channels, which can be leveraged for the provision of basic financial services to the underserved." I think most would agree that the majority of the unbanked and underbanked population is likely to reside in the demographic areas described above. One could conclude that the independent ATM operators are fulfilling a demand of people in these areas for access to cash, their primary method of payment.

Unfortunately for these communities, a number of independent operators are having to shut down and remove their ATMs because their banking relationships are being terminated. These closures started in late 2014, but a larger wave of account closures has occurred over the last several months. In many cases, the operators are given no reason for the sudden termination. Some operators believe their settlement bank views them as a high-risk business for money laundering, since the primary product of the ATM is cash. Financial institutions may incorrectly group these operators with money services businesses (MSBs), even though state regulators do not consider them to be MSBs. Earlier this year, the U.S. House Financial Services Subcommittee on Financial Institutions and Consumer Credit held a hearing over concerns that this de-risking could be blocking consumers' (and small businesses') access to financial products and services. You can watch the hearing on video (the hearing actually begins at 16:40).

While a financial institution should certainly monitor its customer accounts to ensure they align with its risk tolerance and compliance policies, we have to ask whether the independent ATM operators are being painted with too broad a risk brush. The reality is that it is extremely difficult for an ATM operator to funnel "dirty money" through an ATM. First, to gain access to the various ATM networks, the operator has to be sponsored by a financial institution (FI). In the sponsorship process, the FI rigorously reviews the operator's financial stability and other business operations as well as its BSA/AML compliance, because the FI sponsor is ultimately responsible for any network violations. Second, the networks handling the transactions are completely independent of the ATM owners. They produce financial reports that show the amount of funds an ATM dispenses in any given period and generate the settlement transactions. These networks maintain controls that clearly document the funds flowing through the ATM, and a review of the settlement account activity would quickly identify any suspicious activity.
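To make that last control concrete, here is a minimal sketch, entirely illustrative and not drawn from any network's actual reporting, of the kind of reconciliation a sponsoring FI could run: compare the cash each terminal reports dispensing against the settlement credits it receives, and flag any terminal whose settlement meaningfully exceeds its dispensed cash. The terminal IDs, dollar amounts, and tolerance below are hypothetical.

```python
# Hypothetical reconciliation sketch: flag ATMs whose settlement credits
# exceed the cash they reported dispensing by more than a small tolerance.
# All terminal IDs, dollar amounts, and the tolerance are illustrative.

dispensed = {"ATM-001": 42_300.00, "ATM-002": 18_750.00, "ATM-003": 9_800.00}
settled = {"ATM-001": 42_300.00, "ATM-002": 26_400.00, "ATM-003": 9_800.00}

TOLERANCE = 100.00  # allowance, in dollars, for timing differences and adjustments

for terminal, cash_out in dispensed.items():
    credit = settled.get(terminal, 0.0)
    if credit - cash_out > TOLERANCE:
        print(f"{terminal}: settled ${credit:,.2f} vs. dispensed ${cash_out:,.2f} -- review")
```

In this toy example, only ATM-002 would be flagged for review; a real reconciliation would of course work from the networks' actual settlement files.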

The industry groups representing the independent ATM operators appear to have gained a sympathetic ear from legislators and, to some degree, regulators. But the sympathy hasn't extended to those financial institutions that are accelerating account closures in some areas. We will continue to monitor this issue and report any major developments. Please let us know your thoughts.

By David Lott, a payments risk expert in the Retail Payments Risk Forum at the Atlanta Fed

September 10, 2018 in banks and banking, consumer protection, financial services, money laundering, regulations, regulators, third-party service provider | Permalink

September 4, 2018


The First Step in Risk Management

One of the main objectives of information security is having a solid risk management strategy, which involves several areas: policy, compliance, third-party risk management, continuous improvement, and security automation and assessment, to name a few. This diagram illustrates at a high level the full cycle of a risk management strategy: adopting and implementing a framework or standards, which leads to conducting effective risk assessments, which then leads to maintaining continuous improvement.

[Diagram: the risk management cycle, from framework adoption to risk assessment to continuous improvement]


There are more than 250 different security frameworks globally. Examples include the National Institute of Standards and Technology's (NIST) Framework for Improving Critical Infrastructure Cybersecurity, the Capability Maturity Model Integration (CMMI)®, and the Center for Internet Security's Critical Security Controls. (In addition, many industries have industry-specific standards and laws, such as health care's HIPAA, created by the Health Insurance Portability and Accountability Act.) Each framework is essentially a set of best practices that enables organizations to improve performance, important capabilities, and critical business processes surrounding information technology security.

But the bad news is that, on average, 4 percent of people in any given phishing campaign open an attachment or click a link—and it takes only one person to put a company or even an industry at risk. Does your overall strategy address that 4 percent and have a plan in place for their clicks? The report also found that the more phishing emails someone has clicked, the more likely they are to click in the future.
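That statistic doesn't come with the arithmetic spelled out, but a quick back-of-the-envelope calculation shows why "only one person" matters so much. The sketch below is my own illustration, assuming each recipient of a campaign clicks independently at the 4 percent rate:

```python
# Rough illustration (assumption: each recipient clicks independently at 4%).
# The chance that at least one person clicks grows quickly with headcount.

def prob_at_least_one_click(recipients: int, click_rate: float = 0.04) -> float:
    """Probability that at least one recipient clicks, assuming independence."""
    return 1 - (1 - click_rate) ** recipients

for n in (10, 50, 250):
    print(f"{n:>3} recipients: {prob_at_least_one_click(n):.0%} chance of at least one click")
```

Under that simplifying assumption, a campaign sent to just 10 employees already carries roughly a one-in-three chance of at least one click, and at a few hundred recipients a click is all but certain.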

So, outside of complying with legal and regulatory requirements, how do you determine which framework or frameworks to adopt?

It depends! A Tenable Network Security report, Trends in Security Framework Adoption, provides insight into commonly adopted frameworks as well as the reasons companies have adopted them and how fully. Typically, organizations first consider security frameworks that have a strong reputation in their industries or for specific activities. They then look at compliance with regulations or mandates made by business relationships.

This chart shows reasons organizations have adopted the popular NIST Cybersecurity Framework.

[Chart: reasons organizations have adopted the NIST Framework for Improving Critical Infrastructure Cybersecurity]

The study found that there is no single security framework that the majority of companies use. Only 40 percent of respondents reported using a single security framework; many reported plans to adopt additional frameworks in the short term. Close to half of organizations (44 percent) reported they are using multiple frameworks in their security program; 15 percent of these are using three or more.

This year, the Federal Reserve System's Secure Payments Task Force released Payment Lifecycles and Security Profiles, an informative resource that provides an overview of payments. Each payment type is accompanied by a list of applicable legal, regulatory, and industry-specific standards or frameworks. Spoiler alert: the lists are long and complex!

Let me point out a subsection appearing with each payment type that is of particular interest to this blog: "Challenges and Improvement Opportunities." Scroll through these subsections to see specific examples calling for more work on standards or frameworks.

Organizations need choices. But having too many frameworks to choose from, coupled with their constantly changing nature and the fluid payments environment, can complicate the implementation of a risk management strategy. With so many choices and so much in flux, how did you manage step one of your risk management strategy?

By Jessica Washington, AAP, payments risk expert in the Retail Payments Risk Forum at the Atlanta Fed

September 4, 2018 in consumer protection, cybercrime, cybersecurity, payments risk, risk management | Permalink

August 13, 2018


Protecting Our Senior Citizens from Financial Abuse

By all accounts, elder financial abuse appears to be a multi-billion-dollar problem. A 2011 New York State study found that, for every documented case of elder financial exploitation, more than 43 other cases went unreported. A 2015 report from True Link Financial estimates that nearly $17 billion is lost to financial exploitation, defined as the use of misleading or confusing language, often in conjunction with social pressure tactics, to obtain a senior's consent to take his or her money. According to the same report, another $6.7 billion is lost to caregiver abuse, which is deceit or theft by someone who has a trusting relationship with the victim, such as a family member, paid caregiver, attorney, or financial manager.

Over the last several months, Risk Forum members have had several conversations with boards and members of different regional payment associations. The topic of elder financial abuse and exploitation came up often. It has been over seven years since Take On Payments last explored the topic, so we are overdue for a post on the subject given both the interest from some of our constituents and new legislation around elder financial abuse recently signed into law.

With aging baby boomers representing the fastest-growing segment of the population, awareness of the magnitude of elder financial abuse and an understanding of ways to identify and prevent it are critical to the well-being of our senior citizens. And that is exactly the intent of the Senior Safe Act, which Congress passed and which was signed into law on May 24 as Section 303 of the Economic Growth, Regulatory Relief, and Consumer Protection Act. Briefly, the act extends immunity from liability to certain individuals employed at financial institutions (and other covered entities) who, in good faith and with reasonable care, disclose the suspected exploitation of a senior citizen to a regulatory or law enforcement agency. The employing financial institutions are also immune from liability with respect to disclosures that these employees make. Before they were afforded immunity, banks and other financial-related institutions had privacy-violation concerns about disclosing financial information to other authorities. The new immunities are contingent on the financial institution developing and conducting employee training related to suspected financial exploitation of senior citizens. The act also includes guidance regarding the content, timing, and record-keeping requirements of the training.

Massive underreporting of elder financial abuse and exploitation makes it difficult to estimate the amount of money lost. While the law does not require financial institutions to report suspected financial abuse and exploitation, it definitely encourages them to create employee educational programs by offering immunity. And those who know the Risk Forum well know that we are strong advocates of education. Elder financial abuse is a growing problem that must be tackled. How is this law changing your approach to reporting suspected cases of elder financial abuse and related employee education?

By Douglas A. King, payments risk expert in the Retail Payments Risk Forum at the Atlanta Fed

August 13, 2018 in consumer fraud, consumer protection | Permalink

August 6, 2018


The FBI Is on the Case

I recently took advantage of a job shadow program in our Information Security Department (ISD). I joked with our chief information security officer that I was ready to "ride along" with his detectives for our own version of the television drama CSI: Crime Scene Investigation.

All jokes aside, I enjoyed working with ISD as part of the team rather than as an auditor, a role I have played in the past. We spent a good part of the day walking through layered security programs, vulnerability management, and data loss prevention. Underneath these efforts is an important principle for threat management: you can't defend against what you don't know.

Threat investigations absolutely must uncover, enumerate, and prioritize threats in a timely manner. Digging into each vulnerability hinges on information sharing through adaptable reporting mechanisms that allow ISD to react quickly. ISD also greatly depends on knowledge of high-level threat trends and what could be at stake.

It turns out that many payments professionals and law enforcement agencies also spend a large part of their time investigating threats in the payments system. After my job shadowing, I realized even more how important it is for our payments detectives to have access to efficient, modern information-sharing and threat-reporting tools to understand specific threat trends and loss potential.

One such tool is the Internet Crime Complaint Center (IC3). The FBI, which is the lead federal agency for investigating cyberattacks, established the center in May 2000 to receive complaints of internet crime. The mission of the IC3 is two-fold: to provide the public with a reliable and convenient reporting mechanism that captures suspected internet-facilitated criminal activity and to develop effective alliances with industry partners. The agency analyzes and disseminates the information, which contributes to law enforcement work and helps keep the public informed.

The annual IC3 report aggregates and highlights data provided by the general public. The IC3 staff analyze the data to identify trends in internet-facilitated crimes and what those trends may represent. This past year, the most prevalent crime types reported by victims were:

  • Nonpayment/Nondelivery
  • Personal data breach
  • Phishing

The top three crime types with the highest reported losses were:

  • Business email compromise
  • Confidence/Romance fraud
  • Nonpayment/Nondelivery

The report includes threat definitions, how these threats relate to payments businesses, what states are at the highest risk for breaches, and what dollar amounts correspond to each crime type. This is one tool available to uncover, enumerate, and prioritize threats to the payment ecosystem. Do you have other system layers in place to help you start your investigations? If you don't know, it might be time for you to take a "ride along" with your detectives.

By Jessica Washington, AAP, payments risk expert in the Retail Payments Risk Forum at the Atlanta Fed

August 6, 2018 in consumer fraud, consumer protection, cybercrime, cybersecurity, data security, fraud, identity theft, risk management | Permalink

July 23, 2018


Learning about Card-Not-Present Fraud Mitigation

Over the last year, I have had the pleasure of working with Fed colleagues and other payments industry experts on one of the Accredited Standards Committee's X9A Financial Industry Standards workgroups in writing a technical report on U.S. card-not-present (CNP) fraud mitigation. You can download the final report (at no cost) from the ANSI (American National Standards Institute) web store.

As this blog and other industry publications have been forecasting for years, the migration to payment cards containing EMV chips may already be resulting in a reduction of counterfeit card fraud and an increase in CNP fraud and other fraudulent activity. This has been the trend in other countries that have gone through the chip card migration, and there was no reason to believe that it would be any different in the United States. The purpose of the technical report was to identify the major types of CNP fraud and present guidelines for mitigating these fraud attacks to the various payments industry stakeholders.


Source: Data from Card-Not-Present (CNP) Fraud Mitigation in the United States, the 2018 technical report prepared by the Accredited Standards Committee X9, Incorporated Financial Industry Standards

After an initial section identifying the primary stakeholders that CNP fraud affects, the technical report reviews five major CNP transaction scenarios, complete with transaction flow diagrams. The report continues with a detailed section of terms, definitions, and initialisms and acronyms.

The best defense against CNP fraud from an industry standpoint is the protection of data from being breached in the first place. Section 5 of the report reviews the role that data security takes in CNP fraud mitigation. It contains references to other documents providing detailed data protection recommendations.

Criminals will gather personal and payment data in various attacks against those who don't use strong data protection practices, so the next sections deal with the heart of CNP fraud mitigation.

  • Section 6 identifies the major types of CNP fraud attacks, both attacks that steal data and those that use that data to conduct fraudulent activities.
  • Section 7 reviews mitigation tools and approaches to take against such attacks. The section is subdivided into perspectives of various stakeholders, including merchants, merchant acquirers and gateways, issuers and issuer processors, and, finally, payment card networks.
  • Section 8 discusses how a stakeholder should identify key fraud performance metrics and then analyze, report, and track those metrics. While stakeholders will track different metrics, each should measure at a level of detail sufficient to provide key insights and predictive indicators (a simple illustration follows this list).
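The report leaves the choice of metrics to each stakeholder, but as one simple illustration of what tracking a key fraud performance metric can look like, the sketch below expresses fraud losses in basis points of sales volume so that periods with different volumes can be compared. The figures are invented for the example and are not taken from the report.

```python
# Illustrative only: express fraud losses in basis points of sales volume.
# All figures are invented for this example.

def fraud_basis_points(fraud_losses: float, sales_volume: float) -> float:
    """Fraud losses per $10,000 of sales volume (basis points)."""
    return fraud_losses / sales_volume * 10_000

monthly = [
    ("2018-05", 48_000.00, 36_500_000.00),  # (month, fraud losses, sales volume)
    ("2018-06", 61_500.00, 38_200_000.00),
]

for month, losses, volume in monthly:
    print(f"{month}: {fraud_basis_points(losses, volume):.1f} bps")
```

Tracked consistently over time, a simple ratio like this can serve as one of the predictive indicators the report describes.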

The report concludes with several annex sections (appendices) covering a variety of subjects related to CNP fraud. Suggestions for the improvement or revision of the technical report are welcome. Please send them to the X9 Committee Secretariat, Accredited Standards Committee X9 Inc., Financial Industry Standards, 275 West Street, Suite 107, Annapolis, MD 21401. I hope you will distribute this document among those in your institution involved with CNP fraud prevention, detection, and response to use as an educational or reference document. I think it will be quite useful.

By David Lott, a payments risk expert in the Retail Payments Risk Forum at the Atlanta Fed

 

July 23, 2018 in card networks, cards, consumer fraud, consumer protection, cybercrime, cybersecurity, debit cards, identity theft | Permalink

June 4, 2018


The GDPR's Impact on U.S. Consumers

If your email inbox is like mine, it's recently been flooded with messages from companies you’ve done online business with about changes in their terms and conditions, particularly regarding privacy. What has prompted this wave of notices is the May 25 implementation of Europe's General Data Protection Regulation (GDPR). Approved by the European Parliament in April 2016 after considerable debate, the regulation standardizes data privacy regulations across Europe for the protection of EU citizens.

The regulation applies to both data "controllers" and data "processors." A data controller is the organization that owns the data, while a data processor is an outside company that helps to manage or process that data. The GDPR's requirements focus on controllers and processors directly conducting business in the 28 countries that make up the European Union (EU). But the GDPR has the potential to affect businesses based in any country, including the United States, that collect or process the personal data of any EU citizen. Penalties for noncompliance can be quite severe. For that reason, many companies are choosing to err on the side of caution and are sending notices of changes to their privacy terms and conditions to all their customers. Some companies have even gone so far as to extend the privacy protections contained in the GDPR to all their customers, EU citizens or not.

The GDPR has a number of major consumer protections:

  • Individuals can request that controllers erase all information collected on them that is not required for transaction processing. They can also ask the controller to stop companies from distributing that data any further and, with some exceptions, have third parties stop processing the data. (This provision is known as "data erasure" or the "right to be forgotten.")
  • Companies must design information technology systems to include privacy protection features. In addition, they must have a robust notification system in place for when breaches occur. After a breach, the data processor must notify the data controller "without undue delay." When the breach threatens "risk for the rights and freedoms of individuals," the data controller must notify the supervisory authority within 72 hours of discovery of the breach. Data controllers must also notify "without undue delay" the individuals whose information has been affected.
  • Individuals can request to be informed whether companies are obtaining their personal data and, if so, how they will use that data. Individuals also have the right to obtain without charge electronic copies of collected data, and they may send that data to another company if they choose.

In addition, the GDPR requires large processing companies, as well as public authorities and other specified businesses, to designate a data protection officer to oversee the companies' compliance with the GDPR.

There have been numerous efforts in the United States to pass uniform privacy legislation, with little or no success. My colleague Doug King authored a post back in May 2015 about three cybersecurity bills under consideration that included privacy rights. Three years later, each of those bills has either had action suspended or remains in committee. It will be interesting to see, as the influence of the GDPR spreads globally, whether there will be any additional efforts to pass similar legislation in the United States. What do you think?

And by the way, fraudsters are always looking for opportunities to install malware on your phones and other devices. We've heard reports of the criminal element using "update notice" emails. The messages, which appear to be legitimate, want the unsuspecting recipient to click on a link or open an attachment containing malware or a virus. So be careful!

By David Lott, a payments risk expert in the Retail Payments Risk Forum at the Atlanta Fed

June 4, 2018 in consumer protection, cybersecurity, data security, privacy, regulations | Permalink

February 20, 2018


Best Practices for Data Privacy Policies

In my last couple of posts, I've discussed the issue of ethical policies related to data collection and analysis.  In the first one, I focused on why there is a need for such policies. The second post focused on ethical elements to include in policies directly involving the end user. Whether or not the customer is actively involved in accepting these policies, any company that collects data should have a strong privacy and protection policy. Unfortunately, based on the sheer number and magnitude of data breaches that have occurred, many companies clearly have not sufficiently implemented the protection element—resulting in the theft of personally identifiable information that can jeopardize an individual's financial well-being. In this post, the last of this series, I look at some best practices that appear in many data policies.

The average person cannot fathom the amount, scope, and velocity of personal data being collected. In fact, the power of big data has led to the origination of a new term: "newborn data" describes new data created from analyses of multiple databases. While such aggregation can be beneficial in a number of cases—including for marketing, medical research, and fraud detection purposes—it has recently come to light that enemy forces could use data collected from wearable fitness devices worn by military personnel to determine the service members' most likely paths and congregation points. As machine learning technology advances, newborn data will become more common, and it will be used in ways that no one considered when the original data was initially collected.
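To make the idea of newborn data a little more concrete, here is a small, entirely hypothetical sketch: two datasets that look harmless on their own, anonymized fitness-app location pings and a public list of facilities, are combined to infer where a particular device's owner routinely is at a given hour. Every name and value below is invented.

```python
# Hypothetical "newborn data" example: joining two innocuous datasets yields
# an inference that neither dataset contains on its own. All values are invented.

activity = [  # (device_id, hour_of_day, latitude, longitude) from a fitness app
    ("device-17", 6, 34.052, -84.291),
    ("device-17", 18, 34.052, -84.290),
]
facilities = [  # public list of facility names and locations
    ("Facility A", 34.052, -84.290),
]

def near(lat1, lon1, lat2, lon2, tol=0.005):
    """Crude proximity test: within about 0.005 degrees of latitude and longitude."""
    return abs(lat1 - lat2) <= tol and abs(lon1 - lon2) <= tol

for device, hour, lat, lon in activity:
    for name, fac_lat, fac_lon in facilities:
        if near(lat, lon, fac_lat, fac_lon):
            print(f"{device} was near {name} around {hour}:00")
```

Neither dataset names a person or a schedule, yet the combination does, which is exactly the kind of use no one considered when the original data was collected.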

All this data collecting, sharing, and analyzing has resulted in a plethora of position papers on data policies containing all kinds of best practices, but the elements I see in most policies include the following:

  • Data must not be collected in violation of any regulation or statute, or in a deceptive manner.
  • The benefits and harms of data collection must be thoroughly evaluated, and how the collected data will be used, and by whom, must be clearly defined.
  • When the information comes from direct user interaction, the user's consent should be obtained and the user should be given full disclosure.
  • The quality of the data must be constantly and consistently evaluated.
  • A neutral party should periodically conduct a review to ensure adherence to the policy.
  • Protection of the data, especially data that is individualized, is paramount; there should be stringent protection controls in place to guard against both internal and external risks. An action plan should be developed in case there is a breach.
  • The position of data czar—one who has oversight of and accountability for an organization's data collection and usage—should be considered.
  • In the event of a compromise, the data breach action plan must be immediately implemented.

By David Lott, a payments risk expert in the Retail Payments Risk Forum at the Atlanta Fed

February 20, 2018 in consumer protection, cybercrime, data security, identity theft, privacy | Permalink
