About


Take On Payments, a blog sponsored by the Retail Payments Risk Forum of the Federal Reserve Bank of Atlanta, is intended to foster dialogue on emerging risks in retail payment systems and enhance collaborative efforts to improve risk detection and mitigation. We encourage your active participation in Take on Payments and look forward to collaborating with you.

Take On Payments

July 6, 2018


Attack of the Smart Refrigerator

We've all heard about refrigerators that automatically order groceries when they sense the current supply is running low or out. These smart refrigerators are what people usually point to when giving an example of an "internet-of-things" (IoT) device. Briefly, an IoT device is a physical device that connects to the internet wirelessly and transmits data, sometimes without direct human interaction. I suspect most of you already have at least one of these devices operating in your home or office, whether it's a wireless router, baby monitor, voice-activated assistant, or "smart" lights, thermostats, security systems, or TVs.

Experts are forecasting that IoT device manufacturing will be one of the fastest growing industries over the next decade. Gartner estimates there were more than 8 billion connected IoT devices globally in 2017, with about $2 trillion going toward IoT endpoints and services, and projects that by 2020 the number of these devices will exceed 20 billion. But what security are manufacturers building into these devices to prevent monitoring or outside manipulation? What prevents someone from hacking into your security system and monitoring the patterns of your house or office or turning on your interior security cameras and invading your privacy? For those devices that can generate financial transactions, what authentication processes will ensure that transactions are legitimate? It's one kind of mistake to order an unneeded gallon of milk, but another one entirely to use that connection to access a home computer to monitor one's online banking activity and capture log-on credentials.

As one would probably suspect, there is no simple or consistent answer to these security questions, but the overall track record of device security has not been great. There have been major DDoS attacks against websites using botnets composed of millions of IoT devices. Ransomware attacks have been made against consumers' home security systems and thermostats, forcing consumers to pay the extortionist to get their systems working again.

Some high-end devices, such as driverless cars and medical devices, have been designed with security controls at the forefront, but most other manufacturers have given little thought to a criminal's ability to use a device to access and control other devices running on the same network. Adding to the problem, many of these devices do not get software updates, including security patches.

With cybersecurity issues grabbing so many headlines, people are paying more and more attention to the role and impact of IoT devices. The National Institute of Standards and Technology (NIST) has begun efforts to develop security standards for cryptology that can operate within IoT devices. However, NIST estimates it will take two to four years to get the standard out.

In the meantime, the Department of Justice has some recommendations for securing IoT devices, including:

  • Research your device to determine security features. Does it have a changeable password? Does the manufacturer deliver security updates?
  • After you purchase a device and before you install it, download security updates and reset any default passwords.
  • If automatic updates are not provided to registered users, check at least monthly to determine if there are updates and download only from reputable sites.
  • Protect your routers and home Wi-Fi networks with firewalls, strong passwords, and security keys.
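The checklist above lends itself to a simple automated audit of a home-network device inventory. The sketch below is purely illustrative; the device fields and warning messages are hypothetical, not part of any DOJ tooling:

```python
from dataclasses import dataclass

@dataclass
class IoTDevice:
    name: str
    password_changed_from_default: bool
    receives_automatic_updates: bool
    days_since_update_check: int  # days since updates were last checked manually

def audit(device: IoTDevice) -> list[str]:
    """Return a list of warnings for a device, following the DOJ recommendations."""
    warnings = []
    if not device.password_changed_from_default:
        warnings.append("reset the default password")
    if not device.receives_automatic_updates and device.days_since_update_check > 30:
        warnings.append("check for updates (at least monthly)")
    return warnings

# Example: a camera still running its factory password, never updated.
camera = IoTDevice("porch camera", False, False, 45)
print(audit(camera))
```

Running the audit over every device on the network would surface the same gaps the DOJ list asks consumers to check by hand.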

I see IoT device security as an issue that will continue to grow in importance. In a future post, I will discuss the privacy issues that IoT devices could create.

By David Lott, a payments risk expert in the Retail Payments Risk Forum at the Atlanta Fed

July 6, 2018 in consumer fraud, cybercrime, cybersecurity, fraud, identity theft, innovation, online banking fraud, privacy | Permalink


June 4, 2018


The GDPR's Impact on U.S. Consumers

If your email inbox is like mine, it's recently been flooded with messages from companies you’ve done online business with about changes in their terms and conditions, particularly regarding privacy. What has prompted this wave of notices is the May 25 implementation of Europe's General Data Protection Regulation (GDPR). Approved by the European Parliament in April 2016 after considerable debate, the regulation standardizes data privacy regulations across Europe for the protection of EU citizens.

The regulation applies to both data "controllers" and data "processors." A data controller is the organization that owns the data, while the data processor is an outside company that helps to manage or process that data. The focus of the GDPR requirements is on controllers and processors directly conducting business in the 28 countries that make up the European Union (EU). But the GDPR has the potential to affect businesses based in any country, including the United States, that collect or process the personal data of any EU citizen. Penalties for noncompliance can be quite severe. For that reason, many companies are choosing to err on the side of caution by sending all their customers notices of changes to their privacy disclosure terms and conditions. Some companies have even gone so far as to extend the privacy protections contained in the GDPR to all their customers, EU citizens or not.

The GDPR has a number of major consumer protections:

  • Individuals can request that controllers erase all information collected on them that is not required for transaction processing. They can also ask the controller to stop companies from distributing that data any further and, with some exceptions, have third parties stop processing the data. (This provision is known as "data erasure" or the "right to be forgotten.")
  • Companies must design information technology systems to include privacy protection features. In addition, they must have a robust notification system in place for when breaches occur. After a breach, the data processor must notify the data controller "without undue delay." When the breach threatens "risk for the rights and freedoms of individuals," the data controller must notify the supervisory authority within 72 hours of discovery of the breach. Data controllers must also notify "without undue delay" the individuals whose information has been affected.
  • Individuals can request to be informed if companies are obtaining their personal data and, if so, how they will use that data. Individuals also have the right to obtain without charge electronic copies of collected data, and they may send that data to another company if they choose.
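As a rough illustration of how a data controller might expose the erasure and portability rights described above, here is a hedged sketch; all class, method, and field names are hypothetical, not a real API or a compliance recipe:

```python
import json

class CustomerDataStore:
    """Toy data controller supporting erasure and portability requests."""

    def __init__(self):
        self._records: dict[str, dict] = {}
        # Fields that must be retained for transaction processing
        # and are therefore exempt from erasure requests.
        self._required_for_processing = {"account_id"}

    def collect(self, user: str, data: dict) -> None:
        self._records.setdefault(user, {}).update(data)

    def export(self, user: str) -> str:
        """Right to data portability: a free, electronic copy of the data."""
        return json.dumps(self._records.get(user, {}))

    def erase(self, user: str) -> None:
        """Right to be forgotten: drop everything not required for processing."""
        record = self._records.get(user, {})
        self._records[user] = {k: v for k, v in record.items()
                               if k in self._required_for_processing}

store = CustomerDataStore()
store.collect("alice", {"account_id": "A-1", "browsing_history": ["..."]})
store.erase("alice")
print(store.export("alice"))  # only the data required for processing remains
```

The point of the sketch is the carve-out: erasure removes optional data while data needed to process transactions survives, mirroring the exception noted in the first bullet.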

In addition, the GDPR requires large processing companies, as well as public authorities and other specified businesses, to designate a data protection officer to oversee the companies' compliance with the GDPR.

There have been numerous efforts in the United States to pass uniform privacy legislation, but with little or no success. My colleague Doug King authored a post back in May 2015 about three cybersecurity bills under consideration that included privacy rights. Three years later, each bill has either had action suspended or is still in committee. It will be interesting to see, as the influence of the GDPR spreads globally, whether there will be any additional efforts to pass similar legislation in the United States. What do you think?

And by the way, fraudsters are always looking for opportunities to install malware on your phones and other devices. We've heard reports of the criminal element using "update notice" emails. The messages, which appear to be legitimate, want the unsuspecting recipient to click on a link or open an attachment containing malware or a virus. So be careful!

By David Lott, a payments risk expert in the Retail Payments Risk Forum at the Atlanta Fed

June 4, 2018 in consumer protection, cybersecurity, data security, privacy, regulations | Permalink


May 14, 2018


Is My Identity Still Mine?

I'm sure you've seen the famous cartoon by Peter Steiner published in the New Yorker in 1993. That cartoon alluded to the anonymity of internet users. Twenty-five years later, do you think it's still true? Or is the cartoon by Kaamran Hafeez that appeared in the February 23, 2015, issue of the New Yorker more realistic? Is online anonymity a thing of the past?


Having just returned from three days at the Connect: ID conference in Washington, DC, my personal perspective is that numerous key elements of my identity are already shared with thousands of others—businesses, governmental agencies, friends, business colleagues, and, unfortunately, criminals—and the numbers are growing. Some of this information I have voluntarily provided through my posts on various social media sites, though hopefully it is available only to "friends." Other bits of my personal life have been captured by various governmental agencies—my property tax and voter registration records, for example. The websites I visit on the internet are tracked by various companies to customize advertisements sent to me. Despite the adamant disavowals of the manufacturers of voice assistant devices, rumors persist that some of the devices used in homes do more than just listen for a mention of their "wake up" name. And, of course, there is the 800-pound gorilla to consider: the numerous data breaches that retailers, financial institutions, health care providers, credit reporting agencies, and governmental agencies have experienced over the last five years.

The conference exhibit hall was filled with almost a hundred vendors who concentrated on this identity security issue. There were hardware manufacturers selling biometric capture devices for fingers, palms, hands, eyes, and faces. Others focused on customer authentication by marrying validation of a government-issued document such as a driver's license to live facial recognition. Remote identification and authentication of end users is becoming more and more common with our virtual storefronts and businesses, but it is also becoming more challenging as fraudsters look for ways to defeat the technology or the overall process.

I have yet to have my identity stolen or compromised, but notice I said "yet," and I have probably just jinxed myself. Unfortunately, I believe my identity is no longer just mine and is out there for the taking despite my personal efforts to minimize the availability of personal information. Do you agree?

By David Lott, a payments risk expert in the Retail Payments Risk Forum at the Atlanta Fed

May 14, 2018 in cybercrime, data security, fraud, identity theft, privacy | Permalink


February 20, 2018


Best Practices for Data Privacy Policies

In my last couple of posts, I've discussed the issue of ethical policies related to data collection and analysis.  In the first one, I focused on why there is a need for such policies. The second post focused on ethical elements to include in policies directly involving the end user. Whether or not the customer is actively involved in accepting these policies, any company that collects data should have a strong privacy and protection policy. Unfortunately, based on the sheer number and magnitude of data breaches that have occurred, many companies clearly have not sufficiently implemented the protection element—resulting in the theft of personally identifiable information that can jeopardize an individual's financial well-being. In this post, the last of this series, I look at some best practices that appear in many data policies.

The average person cannot fathom the amount, scope, and velocity of personal data being collected. In fact, the power of big data has led to the origination of a new term. "Newborn data" describes new data created from analyses of multiple databases. While such aggregation can be beneficial in a number of cases—including for marketing, medical research, and fraud detection purposes—it has recently come to light that enemy forces could use data collected from wearable fitness devices worn by military personnel to determine their most likely paths and congregation points. As machine learning technology advances, newborn data will become more common, and it will be used in ways that no one considered when the original data was initially collected.

All this data collecting, sharing, and analyzing has resulted in a plethora of position papers on data policies containing all kinds of best practices, but the elements I see in most policies include the following:

  • Data must not be collected in violation of any regulation or statute, or in a deceptive manner.
  • The benefits and harms of data collection must be thoroughly evaluated, and how collected data will be used, and by whom, must be clearly defined.
  • When the information comes from direct user interaction, the user's consent should be obtained and full disclosure given.
  • The quality of the data must be constantly and consistently evaluated.
  • A neutral party should periodically conduct a review to ensure adherence to the policy.
  • Protection of the data, especially data that is individualized, is paramount; there should be stringent protection controls in place to guard against both internal and external risks. An action plan should be developed in case there is a breach.
  • The position of data czar—one who has oversight of and accountability for an organization's data collection and usage—should be considered.
  • In the event of a compromise, the data breach action plan must be immediately implemented.

By David Lott, a payments risk expert in the Retail Payments Risk Forum at the Atlanta Fed

February 20, 2018 in consumer protection, cybercrime, data security, identity theft, privacy | Permalink


April 10, 2017


Catch Me If You Can

I recently became intrigued with a reality network television show that pitted teams of two everyday people (the "fugitives") against a diverse and highly experienced team of former law enforcement, military, and intelligence investigators (the "hunters"). The goal of the contest was for a fugitive team, given a one-hour head start, to elude capture for 28 days to collect a $250,000 prize. The fugitives were given a pot of $500, available only from an ATM, that they could use over the 28 days. But they had a $100 daily limit—and the knowledge that the hunters would be notified of the ATM location immediately. Adding to my interest, the fugitives' geographic boundaries were in the Southeast, with Atlanta as the hub, so there were frequent shots of local places that I recognized and had visited.

Underneath the entertainment value was a demonstration of the classic conflict between personal privacy and big-data analytics. This issue has become increasingly complicated as data collection, storage, and analytics have advanced and become less expensive, faster, and more sophisticated. At the same time, people are participating more in electronic communications, transactions, and other activities that create electronic footprints that can be tracked and analyzed. The show demonstrated these collection capabilities numerous times as the investigators pored over bank account transactions, phone records, social media, property and vehicle databases, and other information to identify clues as to a team's location or the people who might be assisting them.

Two of the nine fugitive teams were successful. In subsequent interviews, both teams cited a key factor they believed was critical to their success. They minimized or eliminated their use of cell phones, email, and social media—going off the grid—to avoid giving hints about their location. Knowing that their location would be signaled whenever they used an ATM to get money, they would have already made arrangements to leave the area immediately, before the hunters closed in. Several of the unsuccessful contestants remarked how amazed they were to discover the wide range of information the investigators were able to access about them, their family, and their friends. Some didn't know their location could be tracked through a cell phone or a photograph posted on social media.

Of course, these contestants, as well as any family and friends who might help them, had to sign numerous waivers allowing the investigators to access and collect much of this information. But how much information would be available without such a waiver or court order? In 2016, the European Union adopted a data privacy regulation that is generally viewed as highly protective of an individual's privacy. In the United States, there have been discussions over recent years about similar legislation without much headway, mostly because of differing views on data collection as well as concerns about First Amendment infringement.

Does there need to be increased transparency from companies that collect data for marketing purposes? Would clearer disclosures make consumers less likely to participate in rewards programs and other activities that involve data collection, and more likely to closely guard their personal information and interests? As always, we welcome your feedback.

By David Lott, a payments risk expert in the Retail Payments Risk Forum at the Atlanta Fed

April 10, 2017 in privacy | Permalink


September 26, 2016


AdmiNISTering Passwords: New Conventional Wisdom

I have lived long enough to go through several cycles of "bad" foods that are now deemed not to be so bad after all. In the 1980s, we were warned that eggs and butter were bad for your heart due to their level of cholesterol. Now, decades of nutritional studies have led to a change in dietary guidelines that take into account that eggs provide an excellent source of protein, healthy fats, and a number of vitamins and minerals. Similar reversals have been issued for potatoes, many dairy products, peanut butter, and raw nuts.

Much to my surprise, much of the old, conventional wisdom about passwords has been turned on its head by proposed digital authentication guidelines from the National Institute of Standards and Technology (NIST) and by an article from the Federal Trade Commission's (FTC) chief technologist, Lorrie Cranor, regarding mandatory password changes. Some of NIST's recommendations include the following:

  • User-selected passwords should be a minimum of 8 characters and a maximum of 64 characters. Clearly, size does matter: generally, the longer the password, the more difficult it is to compromise.
  • A password should be allowed to contain all printable ASCII characters including spaces as well as emojis.
  • Passwords should no longer require the user to follow specified character composition rules such as a combination of upper/lower case, numbers, and special characters.
  • Passwords should be screened against a list of prohibited passwords—such as "password"—to reduce the choice of easily compromised selections.
  • Systems should no longer support password hints, as hints often serve as a backdoor to guessing the password.
  • Systems should no longer use knowledge-based authentication—for example, the city where you were born—as data breaches and publicly available information have made this form of authentication weak.
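A password validator following these draft recommendations might look like the sketch below. The blocklist here is a tiny illustrative sample; a real deployment would screen against a much larger list of known-compromised passwords:

```python
MIN_LEN, MAX_LEN = 8, 64
# Illustrative blocklist; real systems screen against millions of entries.
PROHIBITED = {"password", "12345678", "qwertyuiop"}

def validate_password(pw: str) -> tuple[bool, str]:
    """Apply the NIST-style rules: 8-64 characters, any characters
    (including spaces and emojis) allowed, no composition rules,
    screened against a blocklist of easily compromised choices."""
    if not MIN_LEN <= len(pw) <= MAX_LEN:
        return False, "must be 8 to 64 characters"
    if pw.lower() in PROHIBITED:
        return False, "too common"
    # Deliberately no upper/lower/digit/symbol composition checks.
    return True, "ok"

print(validate_password("correct horse battery staple"))  # (True, 'ok')
print(validate_password("password"))                      # (False, 'too common')
```

Note what the sketch does not do: it never forces a mix of character classes, which is exactly the reversal of conventional wisdom the draft guidelines propose.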

The FTC's Cranor argues in her post that forcing users to change passwords at a set interval often leads them to select weak passwords, and that the longstanding security practice of mandatory password changes needs to be revisited. Her position, which is backed by recent research studies, is consistent with, but not as strong as, NIST's draft guideline, which says users should not be forced to change passwords unless there has been some type of compromise, such as phishing or a data breach. Cranor's post does not represent an official position of the FTC; she recommends that an organization perform its own risk-benefit analysis of mandatory password expiration and examine other password security options.

So while I finish my breakfast of eggs, hash browns (smothered and covered, of course), and buttered toast washed down with a large glass of milk, I will continue to ponder these suggestions. I would be interested in your perspective so please feel free to share it with us through your comments.

By David Lott, a payments risk expert in the Retail Payments Risk Forum at the Atlanta Fed

September 26, 2016 in identity theft, privacy | Permalink


April 18, 2016


"I want to be alone; I just want to be alone"

This line was spoken forlornly by the famously reclusive screen star Greta Garbo, playing the Russian ballerina Grusinskaya in the 1932 film Grand Hotel. The line occasionally makes me wonder why we all can't just be left alone. Narrowed to payments: why does paying anonymously have to indicate you are hiding something nefarious?

Some of you may be asking why it would be necessary to hide anything. I offer the following examples of cases when someone would want to pay anonymously, either electronically or with cash.

  • Make an anonymous contribution to a charitable or political organization to avoid being hounded later for further contributions.
  • Make a large anonymous charitable contribution to avoid attention or the appearance of self-aggrandizement.
  • Recompense someone in need who may or may not be known personally with no expectation or wish to be repaid.
  • Pay anonymously at a merchant to avoid being tracked for unwelcome solicitations and offers.
  • Purchase a legal but socially frowned-upon good or service.
  • Shield payments from scrutiny for medical procedures or pharmacy purchases that are stigmatized.
  • Use an anonymous form of payment to keep my wife from finding out what she will be getting as a gift. (Don't worry; my spouse never reads my blogs, so she doesn't know she needs to dig deeper to figure out what she is getting.)

Some of these cases can be handled easily with the anonymity of cash. As cash becomes less frequently used or accepted, or perhaps even unsafe or impractical, what do we have as an alternative form of payment? Money orders such as those offered by the U.S. Postal Service are an option, though the Postal Service caps individual money orders at $1,000. Nonreloadable prepaid cards such as gift cards offer some opportunity as long as the amount is below a certain threshold. Distributed networks like Bitcoin offer some promise but may come with greater oversight and regulation in the future. Some emerging payment providers claim to offer services tailored for anonymous payments. Still, the future for a truly anonymous, ubiquitous payment alternative like cash doesn't look promising, given the current regulatory climate.

I acknowledge that one needs to find a proper balance between vigorously tackling financial fraud, money laundering, and terrorist financing and the need that I think most of us share for regulators and others to keep out of our personal business unless a compelling reason justifies such an intrusion. Consequently, we should be scrupulous about privacy while still providing the investigatory tools to identify the activities and people involved when payments are used for nefarious purposes. In many ways, this balancing act dovetails with the highly charged debate going on between the value of encryption and the needs of law enforcement and intelligence agencies to have the investigatory tools to read encrypted data. As Greta Garbo famously said and perhaps inadvertently foretold, some of us just want to be left alone.

By Steven Cordray, a payments risk expert in the Retail Payments Risk Forum at the Atlanta Fed

April 18, 2016 in privacy, regulators | Permalink

Comments

I like the open network and transparency that the blockchain offers. I find cash inefficient.

Posted by: Laura | April 20, 2016 at 11:12 AM

Upper middle-income and upper income consumers may not use cash much, but while shopping in certain big-box retailers, I have witnessed many consumers carrying lots of cash.

Posted by: John Olsen | April 18, 2016 at 02:04 PM


July 13, 2015


Biometrics and Privacy, or Locking Down the Super-Secret Control Room

Consumer privacy has been a topic of concern for many years now, and Take on Payments has contributed its share to the discussions. Rewinding to a post from November 2013, you'll see the focus then was on how robust data collection could affect a consumer's privacy. While biometrics technology—such as fingerprint, voice, and facial recognition for authenticating consumers—is still in a nascent stage, its emergence has begun to take more and more of the spotlight in these consumer privacy conversations. We have all seen the movie and television crime shows that depict one person's fingerprints being planted at the crime scene or severed fingers or lifelike masks being used to fool an access-control system into granting an imposter access to the super-secret control room.

Setting aside the Hollywood dramatics, there certainly are valid privacy concerns about the capture and use of someone's biometric features. The banking industry has a responsibility to educate consumers about how the technology works and how it will be used in providing an enhanced security environment for their financial transaction activities. Consumers who understand how their personal information will be protected will be likelier to accept the technology.

As I outlined in a recent working paper, "Improving Customer Authentication," a financial institution should provide the following information about the biometric technology it is looking to employ for its various applications:

  • Template versus image. A system that collects the biometric data elements and processes them through a complex mathematical algorithm creates a mathematical score called a template. A template-based system provides greater privacy than a process that captures an image of the biometric feature and compares it to the original image captured at enrollment; image-based systems create the risk that the biometric elements could be reproduced and used in an unauthorized manner.
  • Open versus closed. In a closed system, the biometric template will not be used for any other purpose than what is stated and will not be shared with any other party without the consumer's prior permission. An open system is one that allows the template to be shared among other groups (including law enforcement) and provides less privacy.
  • User versus institutional ownership. Currently, systems that give the user control and ownership of the biometric data are rare. Without user ownership, it is important to have a complete disclosure and agreement as to how the data can be used and whether the user can request that the template and other information be removed.
  • Retention. Will a user's biometric data be retained indefinitely, or will it be deleted after a certain amount of time or upon a certain event, such as when the user closes the account? Providing this information may soften a consumer's concerns about the data being kept by the financial institution long after the consumer sees no purpose for it.
  • Device versus central database storage. Storing biometric data securely on a device such as a mobile phone provides greater privacy than a cloud-based storage system. Of course, the user should use strong security, including setting strong passwords and making sure the phone locks after a period of inactivity.
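To make the template-versus-image distinction concrete, here is a deliberately toy sketch. The "template" is just a normalized vector of made-up feature measurements, standing in for the proprietary algorithms real systems use; the point is that matching compares derived scores, never raw images:

```python
import math

def make_template(features: list[float]) -> list[float]:
    """Toy stand-in for the algorithm that reduces captured biometric
    measurements to a numeric template; the raw capture is discarded."""
    norm = math.sqrt(sum(x * x for x in features)) or 1.0
    return [round(x / norm, 4) for x in features]

def matches(enrolled: list[float], candidate: list[float],
            threshold: float = 0.1) -> bool:
    """Match by comparing templates: close enough in feature space passes."""
    dist = math.sqrt(sum((a - b) ** 2 for a, b in zip(enrolled, candidate)))
    return dist < threshold

enrolled = make_template([4.2, 1.1, 3.3])
fresh    = make_template([4.1, 1.2, 3.3])   # same finger, slightly different capture
imposter = make_template([1.0, 9.0, 2.0])

print(matches(enrolled, fresh))     # True
print(matches(enrolled, imposter))  # False
```

Because only the template is stored, a thief who steals the database gets numbers, not a reproducible image of the biometric feature, which is the privacy advantage the first bullet describes.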

I believe that the more consumers understand the whys and hows of biometric authentication technology, the greater their willingness will be to adopt it. Do you agree?

By David Lott, a payments risk expert in the Retail Payments Risk Forum at the Atlanta Fed

July 13, 2015 in biometrics, consumer protection, data security, privacy | Permalink


June 30, 2014


A Call to Action on Data Breaches?

I recently moved, so I had to go online to change my address with retailers, banks, and everyone else with whom I do business. It also seemed like an ideal opportunity to follow up on the recommendations that came out after the Heartbleed bug and diligently change all my passwords. Like many people, I had a habit of using similar passwords that I could recall relatively easily. Now, I am creating complex and different passwords for each site that would be more difficult for a fraudster to crack (and at the same time more difficult for me to remember) in an attack against my devices.

I have found myself worrying about a breach of my personal information more frequently since news of the Heartbleed bug. Before, if I heard about a breach at a certain retailer, I felt secure if I did not frequent that store or carry its card. Occasionally, I would receive notification that my data "may" have been breached, and the threat seemed amorphous. But the frequency and breadth of data breaches are increasing, further evidenced by the recent breach of a major online retailer's customer records, which affected about 145 million people.

As a consumer, I find the balance between protecting my own data and my personal bandwidth daunting to maintain. I need to monitor any place that has my personal data, change passwords and security questions, and be constantly aware of the latest threat. Because I work in payments risk, this awareness comes more naturally for me than for most people. But what about consumers who have little time to focus on cybersecurity and need to rely on being notified and told specifically what to do when there's been a breach of their data? And are the action steps usually being suggested comprehensive enough to provide the maximum protection to the affected consumers?

Almost all states have data breach notification laws, and with recent breaches, a number of them are considering strengthening those laws. Congress has held hearings, federal bills have been proposed, and there has been much debate about whether there should be a consistent national data breach notification standard, but no direct action to create such a standard has taken place. Is it time now to do so, or do there need to be more major breaches before the momentum builds to make it happen?

Photo of Deborah Shaw

June 30, 2014 in consumer protection, cybercrime, data security, privacy | Permalink


June 23, 2014


Do Consumers REALLY Care about Payments Privacy and Security?

Consumer research studies have consistently shown that a top obstacle to adopting new payment technologies such as mobile payments is consumers' concern over the privacy and security protections of the technology. Could it be that consumers are indeed concerned but believe that the responsibility for ensuring their privacy and security falls to others? A May 2014 research study by idRADAR revealed the conundrum that risk managers often face: they know that consumers are concerned with security, but they also know they are not active in protecting themselves by adopting strong practices to safeguard their online privacy and security.

The survey asked respondents whether, after hearing of the Target breach, they had taken any actions to protect their privacy or to prevent fraudulent credit or debit card activity. A surprising 79 percent admitted they had done nothing. Despite the scope of the Target data breach, only 4 percent of respondents indicated that they had signed up for the free credit and identity monitoring service offered by the affected retailers (see the chart).

Chart: Consumers' Post-Breach Actions

In response to another question, this one asking how frequently they changed their passwords, more than half (58 percent) admitted that they changed their personal email or online passwords only when forced or prompted to do so. Fewer than 10 percent changed them monthly.

When we compare the results of this study with other consumer attitudinal studies, it becomes clear that getting consumers to actually adopt strong security practices remains a major challenge. At Portals and Rails, we will continue to stress the importance of efforts to educate consumers, and we ask that you join us in this effort.

Photo of Deborah Shaw

June 23, 2014 in consumer fraud, consumer protection, data security, identity theft, privacy | Permalink

Comments

Consumers have been hearing "the horror stories around the campfire" for so long, they have come to believe that if the "boogieman" is going to get you, there is nothing you can do about it. However, this is just not true. The FSO industry needs to promote consumer education efforts to update the public: we are each provided options every day that can serve to reduce our exposure to the fraud/ID theft boogieman - at FraudAvengers.org we call it "anti-fraud activism". Once aware, consumers will find themselves liberated to make choices based on their own risk tolerance about: how they make and receive payments; how they use their communication devices; the places in which they voluntarily place their personal information; the ways in which, and the frequency with which, they monitor their financial, medical, and other personal records; how they do business with people they have never met or do not know; etc. By ensuring we always include the "lessons learned" after we tell our horror stories, we serve to educate the public and inform them of protective actions they can take in their own defense. White-collar criminals are always looking for victims: by reducing one's visibility to them and by proactively knowing what to watch out for, consumers can greatly reduce the likelihood of becoming victims.

Posted by: Jodi Pratt | June 23, 2014 at 03:19 PM

