Enterprise approaches to data privacy and governance need a refresh when it comes to AI because things can go wrong at scale, fast.

Lisa Morgan, Freelance Writer

April 22, 2019

9 Min Read

Emerging privacy laws and the increasing use of AI mean that companies need to rethink their approach to data use and protection, not only in terms of what they’re doing today but also the scenarios that may unfold in the future. If you can’t articulate your company’s data vision, or you can’t find someone in your organization who can, your company’s leadership has some work to do because it isn’t in a position to manage risks effectively.

Here’s why:

Companies have amassed lots of data, and they have to balance two opposing forces: data use and data protection. If the people in the organization who want to use data aren't talking to the people whose job it is to protect it, there may not be appropriate guardrails in place to manage data-related risks effectively. Or the risk management practices may be so strict that the company misses out on business opportunities. The practical details are more complicated, but the underlying task is the same: balancing data use against data protection.

Seizing data opportunities also may raise questions about the company’s overall purpose and direction (including its brand). Getting the right “voice” can be challenging since, historically, companies have structured their data protection programs differently, housing them anywhere from the legal or risk function to IT.

"As organizations move from optimization and automation to innovation and beyond AI, they may want to ask different kinds of data-centric questions such as, ‘What are the boundaries for using data to perform analytics?’ ‘How are those boundaries impacted by AI?’ ‘Who are the stakeholders in these conversations?',” said Deborah Adleman, EY US & Americas Data Protection, Director Ernst & Young LLP. EY is a global professional services organization www.ey.com. “When you add AI on top of that and the implications of making decisions based on the data, it raises the importance of your data protection as well as ethics program and organization."

For example, your company likely provides your customers with appropriate privacy policies (notices) regarding how their data will be processed. You probably also have a data protection policy that’s been distributed to your employees reminding them how their data is protected while also reminding them of their obligations to protect data more broadly. The document also describes the approved uses for the data.

What if later you decide you want to run analytics on the data after anonymizing it? You might assume that would be OK, but perhaps it’s not. Have you considered whether your intended use potentially violates your company’s policies and applicable laws? The first prudent move would be to ask someone on the data protection team. You might also be required to talk to someone in the legal department.

If you’re simply running analytics, the results of which will identify methods to further protect the data, those you consult may indicate the use case is acceptable as-is. Alternatively, if you plan to use the data to train a machine learning algorithm because you want to provide better services to your customers, is that OK? Is that scenario covered by your company’s data protection policy? Check with someone who's been designated to assist with such matters.

How to balance data use and data protection

The rapidly changing regulatory environment requires companies to constantly rethink the details of their data protection policy. Complying with GDPR required considerable effort. Now, the California Consumer Privacy Act (CCPA) is looming; it’s slated to go into effect on January 1, 2020. At present, it’s not clear whether CCPA will cover employee data, but even if it doesn't, the act would apply to employees who interact with the company as consumers. But what if Company A wanted to improve its intern program, so it hired an outside marketing company to pay former Company A interns for their opinions about the work experience at Company A? How would CCPA apply? Then, what if it wanted to use that data to build AI solutions to improve its intern program? Are there new or different people to consult?

Most people are unaware of such nuances, which is why organizations need to put appropriate guardrails in place and, just as importantly, communicate those guardrails from the top of the organization.


To navigate the increasingly choppy waters, the company’s data strategy group and the data protection office need to work together to figure out what is and is not possible when it comes to data use.

"People in the privacy and data protection community are more often being viewed as team builders, bringing together those various voices from the business (the requestor of the data use case) to risk, IT, and legal," said Adleman. "You need to understand what the purpose of the proposed data use is and all the ways you need to protect the data to determine whether and/or how the data can be used. If you can become a trusted advisor, people will know you won’t always say yes, but that your objective is to see if ‘how’ is possible. Of course, you've got to have a good foundation of data principles including data minimization, fit for purpose, data protection, and data security."

For example, if you want to use employee data as machine-learning training data to determine the likely tenure of a certain generation of employees, it may be possible if the data were anonymized. However, if the collection and use of that data violates applicable laws or your employee data protection policy, then the data protection team’s job is to say "no," which is pretty obvious. What may be less obvious, especially to people who are not well-versed in data protection, is that just because data can be used for one purpose does not mean it can be used for another. Yes, it’s the same data, but different rules may apply to different scenarios.
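As a rough illustration of what that anonymization step could look like before training data ever reaches a model, here is a minimal sketch using pandas. The column names, the salted-hash linkage, and the cohort bucketing are all hypothetical choices, not a prescribed method, and strictly speaking this produces pseudonymized data, which many privacy laws still treat as personal data.

```python
import hashlib
import pandas as pd

# Hypothetical employee records; column names are illustrative only.
employees = pd.DataFrame({
    "employee_id": ["E1001", "E1002", "E1003"],
    "name": ["Ana Ruiz", "Ben Kim", "Cara Osei"],
    "birth_year": [1991, 1985, 1978],
    "department": ["Sales", "IT", "Finance"],
    "tenure_years": [2.5, 6.0, 11.2],
})

SALT = "store-and-rotate-this-secret-separately"  # assumption: salt kept outside the dataset

def pseudonymize(df: pd.DataFrame) -> pd.DataFrame:
    out = df.copy()
    # Drop direct identifiers outright.
    out = out.drop(columns=["name"])
    # Replace the employee ID with a salted hash so records can still be linked
    # for model training but are not trivially re-identifiable.
    out["employee_id"] = out["employee_id"].apply(
        lambda x: hashlib.sha256((SALT + x).encode()).hexdigest()[:16]
    )
    # Generalize birth year into a coarse decade cohort to reduce re-identification risk.
    out["birth_cohort"] = (out.pop("birth_year") // 10) * 10
    return out

training_data = pseudonymize(employees)
print(training_data)
```

Even with a step like this, the data protection team still has to confirm that the purpose itself is permitted; technical de-identification does not substitute for that review.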

To manage risks effectively, companies need a well-conceived policy and they need to operationalize that policy so employees are inclined to think and act in the best interests of the organization. People who want to use data should work with the data protection team to ensure that the intended use is acceptable. The data protection team should have an open-door policy that encourages discussion. In addition, the policy should be programmatically implemented in data-related tooling. Particularly when it comes to AI use cases, a newcomer to these discussions is the ethics organization.
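What "programmatically implemented in data-related tooling" could mean in practice is a simple purpose-based gate that every data request passes through before any data is released. The sketch below is one hypothetical way to express such a policy in code; the registry contents, dataset names, and function names are assumptions for illustration only.

```python
from dataclasses import dataclass

# Hypothetical registry mapping datasets to purposes the data protection
# team has pre-approved. Anything not listed requires a human review.
APPROVED_USES = {
    "customer_transactions": {"fraud_detection", "security_analytics"},
    "employee_hr_records": {"payroll", "benefits_administration"},
}

@dataclass
class DataRequest:
    dataset: str
    purpose: str
    requester: str

def check_request(req: DataRequest) -> str:
    """Approve only purposes the policy explicitly allows; route everything
    else to the data protection team instead of silently granting access."""
    allowed = APPROVED_USES.get(req.dataset, set())
    if req.purpose in allowed:
        return "approved"
    return "needs_review"  # e.g., open a ticket with the data protection office

# A request to train an ML model on HR data is not pre-approved,
# so it gets escalated rather than quietly fulfilled.
print(check_request(DataRequest("employee_hr_records", "ml_tenure_model", "analyst_42")))
```

The point of a gate like this is not to replace conversation but to force it: the "needs_review" path is where the requester, risk, legal, IT, and the ethics organization meet.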

Be forward-looking

The increased use of autonomous and intelligent systems requires people to think more about the potential scenarios of how data can be used.

"In the pre-AI world, you’d ask about the nature of the data and the business purpose for its use," said Adleman. "The next layer is imagining future scenarios that occur when you have the data you want. By educating people and teaching them how to think about the future impact (strengthening their understanding of the company’s ethical foundation), I think there’s a greater likelihood of avoiding unintended consequences."

Being forward-looking in a responsible way has to happen from the top down and from the bottom up. Specifically, it’s the C-suite’s job to ensure the company's policy is implemented, including mechanisms for monitoring and enforcing it. Employees should have full notice of the policy, which states what’s acceptable, what isn’t, and what happens if the policy is violated. To do that effectively, management has to have a vision of where the company is headed with its data use and the scenarios that could unfold, good and bad.

[Deborah Adleman of EY will be presenting the session Can You Trust Your Autonomous and Intelligent Systems? at Interop 2019 on May 22 in Las Vegas.]

Employees need more than just a piece of paper or a collection of bits that's downloadable from a portal. Instead, the policy should be part of the employment agreement and the employee handbook. In addition, the policy should be supported by training that includes hypothetical scenarios so employees know how to think critically about the issue in the first place. It's also important to have an "800 number" or a person to call if in doubt about what to do.

For instance, some executives try to pressure data scientists into misusing data to prove something other than what the data actually says. In some cases, the request is not a polite one; it’s a threat upon which the data scientist’s job depends. Not surprisingly, these data scientists might call a professional association to which they belong, or an employment lawyer, to figure out how to handle the situation, when there should be a go-to person at the company, or acting on its behalf, who can provide guidance without subjecting the data scientist to employment risks. In other words, companies should not retaliate against employees who are trying to do the right thing, ethically speaking.

"In the AI world, organizations will continue to rely on the strength of their systems of ethics, data protection governance and accountability," said Adleman. "There’s accountability at different levels. The first level is the company’s accountability to its stakeholders, shareholders and the public. Then, there’s the accountability of the leadership that sets the tone in the organization and confirms from the top down what people’s obligations are as well as management who carry that message from the core (the middle). Lastly, accountability includes the systems of monitoring and testing of the control framework."

The framework should include consistent monitoring, including of self-learning AI systems, to ensure they are not violating the company’s policies or applicable laws. That requires system transparency and complete traceability of data usage, because ultimately the company, its leadership, and the people involved might be held accountable if something goes wrong.
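At its simplest, that traceability could take the form of an append-only usage log recording which dataset, purpose, and model version touched the data and when. The field names and file format below are assumptions, not a prescribed schema, but they show the kind of record an auditor or regulator would want to reconstruct after the fact.

```python
import json
import time

AUDIT_LOG = "data_usage_audit.jsonl"  # hypothetical append-only log, one JSON record per line

def log_data_use(dataset: str, purpose: str, model_version: str, user: str) -> None:
    """Record every training or scoring run so usage can later be traced
    back to a specific dataset, purpose, and model version."""
    record = {
        "timestamp": time.time(),
        "dataset": dataset,
        "purpose": purpose,
        "model_version": model_version,
        "user": user,
    }
    with open(AUDIT_LOG, "a", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")

# Example: note the run before kicking off a retraining job.
log_data_use("customer_transactions", "fraud_detection", "fraud-model-v3.2", "ml_pipeline")
```

In a production setting this would feed whatever monitoring and control framework the company already operates; the sketch only illustrates the minimum information worth capturing.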

"AI forces us to think in a different way. You have to take a ‘possibilities’ or ‘non-human’ mindset and overlay that with the ethical mindset," said Adleman. "AI will enable you to do things you can’t do today. What it can’t do is give us an out. You can’t say the AI did this wrong or I developed this model and I didn’t know if I fed it a gargantuan amount of data it would do something that has harmed the rights and privileges of natural persons."

Bottom line

Data protection policies have to evolve with emerging laws, business practices, and technologies, which means that companies have to consider how data can be used, how it is used, and how it will be used in the future. While not every scenario is foreseeable, organizations can better manage risks by having policies and practices in place that contemplate the possibilities and serve as easily understandable guardrails that minimize unnecessary risks.

For more on ensuring data privacy for your organization and your customers, check out this new article on InformationWeek.

Data Privacy, Transparency Get New Weapon

About the Author(s)

Lisa Morgan

Freelance Writer

Lisa Morgan is a freelance writer who covers big data and BI for InformationWeek. She has contributed articles, reports, and other types of content to various publications and sites ranging from SD Times to the Economist Intelligence Unit. Frequent areas of coverage include big data, mobility, enterprise software, the cloud, software development, and emerging cultural issues affecting the C-suite.
