Video: GDPR: US companies with no physical presence in EU still need to comply
GDPR is finally here, and it’s here to stay. The 25th of May is not the finish line; it’s really only the beginning. GDPR’s impact will be felt across the board and across the world. ZDNet spoke with a wide range of experts, aiming to decipher the real-life applicability and impact of GDPR.
In the first part, we looked into the reasons why most organizations are not ready to deal with GDPR, the chances of US-based organizations being scrutinized under GDPR, and the process and expectations for individuals wishing to exercise their rights under GDPR.
We now continue with GDPR audits on premise and in the cloud, its impact on innovation, machine learning, and interpretable artificial intelligence, as well as how GDPR extends across borders and organizations and into the future.
Auditing data on premise and in the cloud
So, supposing we get over the unclear and complex process of granting a regulator access to audit an organization’s data, how would that actually work in practice? Would records have to be delivered to the auditors remotely, or would auditors have to visit on-site? What about data stored in the cloud?
Julia Jessen, manager at Accenture within the financial services industry, and Henk-Jelle Reitsma, senior manager of finance and risk for the Benelux practice of Accenture, have a lot of experience working with GDPR and are quite upfront about this:
“Truth is: we don’t know. We would expect a mix of remote and on-site supervision, depending on the situation. Remote by default, on-site in case of (serious and/or multiple) complaints. On-site every five years, remote in the intermediate years (when no signals/complaints/notifications)?
Given GDPR’s scope, we expect supervision by exception, reacting to notifications and/or multiple serious complaints, much like other aspects of law that apply to all of us. The police do not check all households every day to see whether somebody has been killed. They react to missing persons and then start a case.
Data stored in the cloud will be an integral part of any supervisory investigation/scope, though for certain industries, such as Financial Services, data stored in the cloud is already under regulators’ scrutiny, both on-site and remotely.”
Lorena Jaume-Palasí, executive director of NGO Algorithm Watch, agrees on the legal aspect, but has a different perspective:
“In civil law, human interaction is not seen as a risk. Rules are applied after harm is done and a claimant can prove that harm was done. GDPR is based on the precautionary principle and sees the processing of personal data as a general risk.
This is highly controversial, because dogmatically it does not understand the difference between public personal data processing and private personal data processing. Moreover, harm to society can be caused both with personal and non-personal data.”
French Caldwell is a former advisor to the White House on cybersecurity. Caldwell, who currently serves as chief evangelist for MetricStream, providing solutions to simplify governance, risk, and compliance for enterprises, says it’s complicated:
“Certainly supervisory authorities would retain the option for records to be delivered electronically and to do on-site audits. As for data in the cloud, it’s complicated. Storing data outside the EU is allowed so long as the right controls and protections are in place, but many European companies already insist that data in the cloud be stored in EU-located servers.”
Dimitri Sirota is the CEO and co-founder of BigID, a company that uses machine learning to automate compliance for regulations like GDPR, and he joins the choir singing the uncertainty tune:
“Not all the Data Protection Authorities (DPAs) have given guidance, but it’s my understanding the requests for information can be remote. I don’t believe all the DPAs have strategies in place for how they will audit conformance, given that some of what GDPR asks — like subject access — is not as straightforward to verify.”
GDPR’s impact on innovation, machine learning, and interpretable artificial intelligence
Most of the talk about GDPR has been about personal data and consent. However, there are some provisions in GDPR that are usually overlooked, namely on data portability and algorithm interpretability. Could GDPR help drive technological advances on those fronts?
Sirota says that GDPR has led to significant innovation in privacy, and BigID is a testament to that:
“Before GDPR, tools didn’t exist for finding data belonging to an individual. Similarly, while consent logging existed, there was no effective way to track consent across apps and systems using different consent capture systems. The same goes for automating records of data processing, or for identifying affected individuals in breach data within a 72-hour window.”
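To illustrate the kind of cross-system consent tracking Sirota describes, here is a minimal sketch. The schema and names (`ConsentEvent`, the `"crm"` and `"newsletter"` systems) are hypothetical, invented for illustration, and not BigID’s actual implementation; the point is simply that the current state of consent is the latest event per system and purpose, so a withdrawal anywhere must override earlier grants:

```python
from dataclasses import dataclass
from datetime import datetime, timezone

# Hypothetical schema: one append-only record per consent event, per source system.
@dataclass
class ConsentEvent:
    subject_id: str      # pseudonymous identifier for the data subject
    system: str          # capture system, e.g. "crm" or "newsletter" (illustrative)
    purpose: str         # processing purpose the consent covers
    granted: bool        # True = consent given, False = consent withdrawn
    timestamp: datetime

class ConsentLedger:
    """Append-only log; effective consent = latest event per (system, purpose)."""

    def __init__(self):
        self.events: list[ConsentEvent] = []

    def record(self, subject_id, system, purpose, granted):
        self.events.append(ConsentEvent(
            subject_id, system, purpose, granted,
            datetime.now(timezone.utc)))

    def has_consent(self, subject_id, purpose):
        # Consent holds only if the most recent event in every system that
        # captured consent for this purpose is a grant; any withdrawal wins.
        latest = {}
        for ev in self.events:   # events are in chronological order
            if ev.subject_id == subject_id and ev.purpose == purpose:
                latest[ev.system] = ev.granted
        return bool(latest) and all(latest.values())

ledger = ConsentLedger()
ledger.record("u42", "crm", "marketing", True)
ledger.record("u42", "newsletter", "marketing", True)
ledger.record("u42", "newsletter", "marketing", False)  # withdrawal in one system
print(ledger.has_consent("u42", "marketing"))           # withdrawal overrides: False
```

An append-only log rather than a mutable flag also gives the audit trail a regulator would ask for: who consented to what, where, and when it was withdrawn.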
Andrew Burt, chief privacy officer and legal engineer at Immuta, an information governance platform that integrates data sources for data scientists, says that GDPR’s provisions on algorithmic interpretability, while admittedly vague at this point, have helped spur a lot of new interest in the field of interpretable machine learning, or “explainable AI.”
“The regulation is frequently cited as the first major law mandating some form of explainability or interpretability in machine learning models. So the short answer is, yes, I believe the regulation can help spur some advances here, merely by dialing up the pressure on these issues, which it already has.
The flip side, though, is that the regulation may also hinder some implementations of cutting-edge technology, depending on how regulators in the EU interpret and enforce some of the more ambiguous provisions. I think a lot of us are waiting to see how this regulatory environment shapes up.”
Jessen and Reitsma say that in their experience organizations might be most vocal about personal data and consent, but they also focus on subject access rights, data retention and deletion, managed information security, etc. They note that GDPR will have a big impact across all these aspects of dealing with personal data:
“In Financial Services, the focus is not necessarily on consent, as especially in this industry there are many grounds on which customers’ data can be processed. We think the area that organizations are most worried about is how to keep a clear and structured oversight of data flows — all data captured and processed.”
Jaume-Palasí notes that consent is only one possibility, and it entails many risks, since consent can be retracted without any justification, and adds that most organizations will probably try to legitimize their data processing in order to avoid having to delete data. In general, she says, GDPR is not geared toward technological advance, and Jaume-Palasí gives some examples:
“Data portability is an unclear point since most data cannot be attributed to a single person. Take a posting in Facebook: What does the person who wrote the post own? The whole thread with comments and likes from friends? Or only the post? What if the post has a screenshot of a posting or comment from someone else? And how about pictures with more than one person?
Algorithm interpretability is another issue: Fully automated processes are generally forbidden, with only a few exceptions. But only fully automated processes have to deliver a logical explanation of the process.
Meanwhile other semi-automated processes with a human in the loop do not have to impart any explanation of the logic of the formula used, even though those processes may assist in making decisions that are highly relevant for the everyday life of citizens, such as credit scoring.”
Caldwell also notes that despite some nods to advanced analytics and social media, GDPR is still based in large part on the view that personal data is structured and readable by humans:
“I see nothing in the law that deals specifically with the algorithmic behavioral models that are created from data. If I create a good model of your behavior, then the data that I based the model on is no longer relevant. Rather, I can anticipate how you will think and react.”
GDPR across borders and organizations and into the future
And what about GDPR’s impact? Is legislation in the rest of the world likely to follow in GDPR’s footsteps? Some organizations, most notably Microsoft, have promised to offer GDPR-like privacy to all their users, not just EU residents, possibly as a way to simplify their internal processes.
Caldwell notes that US companies find it easier to maintain one set of privacy policies worldwide, and they have updated their policies to be compliant with GDPR:
“If US companies fail to adhere to their own policies, then EU data subjects could complain to the Federal Trade Commission. The FTC has levied the highest fines in the world against companies that fail to adhere to their own privacy policies. GDPR also calls for new certification standards.
A future scenario is that the FTC and EU authorities could agree on a joint certification standard for US companies similar to the Asia Pacific Economic Cooperation privacy standard, which the FTC enforces in the US. A GDPR-based EU-US certification standard would help to reduce uncertainty for US-based companies, and also provide a US-based enforcement mechanism.
Cross-border regulatory migration is common, and privacy is a hot topic with publics around the world. I expect other countries, states, and provinces will wait and see what works with GDPR and what doesn’t.
As an example of cross-border regulatory migration, privacy by design is a fundamental principle under GDPR, but the regulation does not define it at all. In the absence of that definition, the UK ICO has adopted Ontario’s privacy by design principles — take a look at the last section of this page.”
Sirota also points out that we already see cross-over impact from the legislation, and notes that “with over 500 million citizens, Europe has clout, and as we’ve seen privacy concerns are not limited to one or another jurisdiction.”
Jaume-Palasí, however, points to an example showing that a legal framework is not, in and of itself, a panacea:
“South Korea has adopted the German Data Protection law. Until two years ago, the former Prime Minister used the documentation duties to establish a fine-grained surveillance apparatus.
I would be very cautious with the export of regulation into non-democratic countries. This is another example showing why in the end the most relevant part is the legal culture and interpretation of the local regulator.”
Jessen and Reitsma believe that “we are not able to handle EU and non-EU subjects separately” is not a response that will be accepted:
“If you can’t, you should work hard to change that. These kinds of tougher nuts are likely to be part of the backlog post-May 25. GDPR shows us a clear direction for legislation globally: A generic approach across the business model, strategy, and operations.
Putting the subject, the one whom the personal data primarily concerns, front and center. Although other global legislation might and will differ on details and scope, these elements and the general direction are likely to be echoed.”
Burt says that, for large companies, it may be easier to just default to GDPR-like policies for everyone:
“There’s going to be a range of reactions through the world. Given how large the EU market is, though, many companies will simply have no choice but to revamp their data governance practices from the ground up — and for some of our big thinking data-driven customers, that’s exactly what we’re seeing at Immuta.
Enterprises that are most invested in their data science programs — that are the most serious about becoming algorithm-driven — find a lot of opportunity in GDPR, because its mandates actually translate into best practices for data science at scale.
Organizations need to understand what data they have, and how they’re using it, in order to maximize its value. From a bigger picture, that’s exactly what the GDPR is all about.”