By now, we should all know that anything you post, or that is posted about you, on the Internet will likely be there forever, and all of our actions will ultimately be available for all eyes to see. And while older generations of users have long been skeptical about Internet privacy and security, it is actually Millennials and teens who have more quickly learned to take advantage of applications and tools that are more likely to protect their conversations and data.
This may help explain the rise of Snapchat and other more private messaging platforms over the past few years, and why younger consumers prefer these to Facebook and Twitter. But it's not just social media that causes poster's remorse. A recent poll showed that 88 percent of Americans support a "right to be forgotten," or the power to tell Google and other Internet providers to remove old and/or embarrassing content.
Over the past few months, young and old Internet users alike, in the U.S. and around the globe, have had Internet data privacy issues come to their immediate attention. Facebook's Cambridge Analytica scandal has mainstreamed what might be called "data consciousness" in the U.S. In Europe, meanwhile, GDPR (the EU's General Data Protection Regulation) has empowered consumers to take control of how businesses use their data. Businesses that run afoul of the regulation can face fines of up to €20 million (about $23 million) or 4 percent of annual global revenue, whichever is higher. Taken together, these events have suddenly made data privacy a top concern for CIOs and CISOs.
Whose data is it, anyway?
As the fate of the now-bankrupt Cambridge Analytica illustrates, misusing consumer data can have serious consequences. The potential for such misuse should therefore be treated along the same lines as the potential for a data breach.
That's where the similarities end, though. A data breach is often the outcome of negligence (the Equifax breach occurred because the credit-reporting giant didn't apply a patch that had been available for weeks) or human error (phishing attacks and the like). But improper use of data might be the result of corner-cutting in the marketing department or rampant data siloing. As some have argued, just because data exists in a digital form doesn't mean it's the purview of the CIO.
Careful who you partner with
Another lesson from the Cambridge Analytica debacle was the collateral damage it inflicted on Facebook. Yes, Facebook may have deserved what it got, but that incident is a warning to keep track of how your partners procure and handle consumer data.
It is also a warning to take stock of the data you have obtained from partners. A recent Accenture study found that most consumers are OK with businesses using their data; among those who were not, the objection was that the brand held data about them that they had not knowingly shared. In other words, if your company obtains consumer info from a purchased list, a significant share of those consumers will not be happy with your brand. In Europe, of course, your company could also face serious fines under GDPR.
Consumers have a complex relationship with their data. On the one hand, there’s a natural tendency to keep personal details and preferences private. But consumers also realize that services like Google and Facebook aren’t really free and rely on such data to fuel advertising revenues.
Accenture's research found that 83 percent of consumers are willing to share their data if they receive a personalized experience in return. In the U.S., there's ongoing debate and mixed sentiment around regulation. An upcoming Supreme Court case, for instance, will decide whether the police were within their rights to use cellphone tower data to place a man at the scene of several armed robberies.
Murkier still is the data gleaned by smart speakers like Amazon Echo and Google Home. The former has been known to record users' personal conversations. Is that an invasion of privacy? Some would argue so, but others would say that when you buy such a device and install it in your home, you are agreeing to terms of service that waive that expectation of privacy.
Biometric security offers another case in point: Is it reasonable to ask an employee for their fingerprints or is that too an invasion of privacy? Does that same employee have an issue giving his fingerprints to Apple for Touch ID on his iPhone? Such ethical conundrums multiply as more and more devices enter the public realm.
What we're finding in 2018, then, is that none of us knows exactly where the line is for personal data. But wherever we land on the issue, it's businesses and their IT departments that will be on the front lines.
What can we do?
As a company executive, it is critical to understand how your business stores, shares, analyzes and protects its core private data, particularly that of customers and employees. Today, this is a very difficult task: despite the best of intentions, most companies do not know where all of their data is or where it flows, particularly when third parties are involved in business processes.
The first order of business for companies is to locate this data, and then track where it flows as part of normal business processes. Doing this manually is an impossible task, but machine-learning-enabled applications entering the market can help make the problem more tractable.
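To make the discovery step concrete, here is a minimal sketch of automated data discovery. It uses two toy regular-expression detectors (email addresses and U.S. Social Security numbers); the patterns, function names, and record format are illustrative assumptions, and real discovery products use far richer detectors, often backed by machine learning.

```python
import re

# Toy patterns for two common categories of personal data.
# Real tools use many more detectors, plus ML-based classification.
PII_PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def scan_records(records):
    """Return (record_index, category) pairs for PII found in free-text records."""
    hits = []
    for i, text in enumerate(records):
        for category, pattern in PII_PATTERNS.items():
            if pattern.search(text):
                hits.append((i, category))
    return hits

records = [
    "Order #1293 shipped to jane.doe@example.com",
    "Meeting notes: revisit Q3 roadmap",
    "Applicant SSN on file: 123-45-6789",
]
print(scan_records(records))  # → [(0, 'email'), (2, 'ssn')]
```

Even this toy version shows why the problem is hard at scale: every data store, log file, and third-party feed would need to be scanned and the hits mapped back to business processes.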
As part of this process, data may need to be cleansed: some customer data may be under strict regulation, other data may simply be stale, and so on. Driven in particular by the financial penalties of GDPR, start-ups have begun to innovate with AI-based systems that protect, filter or even delete data based on business rules and regulatory frameworks.
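A rule-driven cleansing pass might look like the following sketch. The two-year retention window, the record fields, and the `erasure_requests` set are hypothetical stand-ins for whatever a company's retention policy and GDPR-style right-to-erasure workflow actually define.

```python
from datetime import date, timedelta

RETENTION = timedelta(days=730)  # hypothetical two-year retention rule

def cleanse(records, today, erasure_requests):
    """Drop records past the retention window or covered by an erasure request."""
    kept = []
    for rec in records:
        if rec["customer_id"] in erasure_requests:
            continue  # honor a right-to-erasure request
        if today - rec["last_updated"] > RETENTION:
            continue  # stale: past the retention window
        kept.append(rec)
    return kept

records = [
    {"customer_id": "c1", "last_updated": date(2018, 1, 10)},
    {"customer_id": "c2", "last_updated": date(2015, 6, 1)},   # stale
    {"customer_id": "c3", "last_updated": date(2018, 3, 2)},   # erasure requested
]
result = cleanse(records, today=date(2018, 5, 1), erasure_requests={"c3"})
print([r["customer_id"] for r in result])  # → ['c1']
```

The appeal of the AI-based systems mentioned above is that they learn which records fall under which rules, rather than requiring every rule to be hand-coded this way.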
We are entering very new territory, in which privacy concerns will drive a new wave of AI-powered security technology. For the integrity of our personal data and the success of our businesses, let's all hope these new tools can keep pace with the challenges.
This article is published as part of the IDG Contributor Network.