10-minute read

Quick summary: By leveraging best practices in AI and privacy, businesses can maintain compliance with data privacy laws and build customer trust while also realizing the benefits of artificial intelligence and machine learning.

Contents

Artificial intelligence privacy issues

How does AI collect personal data?

Does AI create bias?

The semantics of “data privacy” and “artificial intelligence”

What is personal data, anyway?

… and while we’re at it, what is artificial intelligence?

Current state of AI and data privacy

Privacy laws and artificial intelligence

Blueprint for an AI Bill of Rights

AI and customer sentiment

Best practices for aligning AI and data privacy

Shaping the future of AI and data privacy

Hard to believe it’s been nearly five years since GDPR went into effect, kicking off a frenzy of data privacy activity across industries. (Remember getting allllll those emails from your favorite online vendors back in 2018?) Since then, the protection of personal data has taken its rightful place as an essential business priority, and while businesses have gotten better at it, new challenges are always complicating their mission—not the least of which is the advancement of artificial intelligence (AI) and machine learning into mainstream business practices.

Fortunately, best practices in aligning artificial intelligence and privacy have begun to emerge, enabling organizations to maintain regulatory compliance—and customer trust—while also enjoying the many benefits that AI technologies have to offer.

Artificial intelligence privacy issues

Certainly, any business process that involves personal data could be a risk point for violating data privacy laws and eroding customer trust, so why do AI-driven processes merit special attention?

To answer this question, it helps to revisit the reason why data privacy laws exist in the first place: to give individuals control over how organizations gather and use their personal data.

How does AI collect personal data?

It’s a fairly straightforward matter when, for example, a consumer completes a purchase and the vendor collects their name, address, and credit card number. The consumer has willingly shared this information for the purpose of making a purchase and now, depending on their residency and/or geographic location, has certain legal rights regarding what happens to it.


When AI enters the picture, the situation is considerably less clear. For example, if an AI-enabled security camera detects a face and determines that it belongs to a certain individual, the chances of the identified person even being aware that this data is being collected are slim—let alone knowing who collected it or how it is being used. If the subject is covered by a data privacy law such as GDPR or CCPA, exercising the rights granted by the applicable legislation (right to be informed, right to be forgotten, right to restrict processing, etc.) becomes problematic, to say the least.

Does AI create bias?

Another data risk with AI is the potential for unintentional biases that deliver unfair outcomes based on personal data. AI systems “learn” how to make decisions from training data sets that were created by humans and thus can incorporate human biases that wind up “baked in” to the system. Amazon, for example, had to discontinue use of a hiring algorithm after discovering that it discriminated against women.

Organizations and businesses that leverage AI with regard to personal data, therefore, have added responsibilities if they are to ensure continued alignment with data privacy laws and maintain the trust of their customers.

The semantics of “data privacy” and “artificial intelligence”

The first challenge in reconciling privacy and artificial intelligence is one of semantics.

What is personal data, anyway?

Since GDPR went into effect, a host of other countries and U.S. states have passed their own data privacy legislation, each with a slightly different perspective on what constitutes personal data. GDPR, for example, views personal data as information related to a natural person (“Data Subject”) that can be used to directly or indirectly identify the person, highlighting practical examples such as name, physical address, ID card number, etc. The California Consumer Privacy Act (CCPA) added components such as biometric data, internet activity records, and geolocation, among others. The California Privacy Rights Act (CPRA), which went into effect January 1, 2023, introduced specific categories of “sensitive data”—including Social Security numbers and genetic data—that merit special considerations on top of those required for other personal information.

So a business looking to achieve and maintain readiness for current and future data privacy laws must first address the question “What is it that we’re protecting?”

When we work with clients on data privacy, we look at all their data—a practice that goes back many years before GDPR was on the scene. We help them understand what data they have and how they’re processing it, and we eliminate weak points such as data duplicates. Then we can create a holistic view of all the ways they’re collecting and processing personal data. With regard to how we define “personal data,” we look at the big picture and align our working definition with that of the most stringent regulations, regardless of which one(s) apply to the company at the time.

… and while we’re at it, what is artificial intelligence?

As AI becomes further entrenched in mainstream business practices—with drag-and-drop tools now available and affordable for nearly any business and any use case—the line between artificial intelligence and traditional automation is becoming increasingly blurred.

Today there are as many definitions of artificial intelligence as there are sources defining it. Most incorporate some variation of “machines performing functions traditionally performed by humans,” but how does this translate into practical, day-to-day operations?

Here again, businesses must address a complex question: “Exactly what is it that we’re protecting personal data from?”

Our clients utilize multiple approaches to automation, both with and without artificial intelligence. When a process involves decisions, such as profiling, being made without human input, we view it as an AI-driven process and take additional measures to ensure that personal data is protected.

Current state of AI and data privacy

Privacy laws and artificial intelligence

While neither GDPR nor CCPA addresses AI by name, we do see high-level coverage of “automated individual decision making,” which is being interpreted to include AI and machine learning. Article 22 of GDPR, for example, states, “The data subject shall have the right not to be subject to a decision based solely on automated processing, including profiling, which produces legal effects concerning him or her or similarly significantly affects him or her.” We are also watching several proposed state and federal laws that aim to address the issue of AI and privacy more directly.


Blueprint for an AI Bill of Rights

One government initiative worth noting is the White House’s recent release of a “Blueprint for an AI Bill of Rights,” which aims to “provide guidance whenever automated systems can meaningfully impact the public’s rights, opportunities, or access to critical needs.” Two key tenets of the document focus on the concepts of transparency (“this is what we are doing”) and explainability (“this is how we do it”).


In addition, the Data Privacy segment of the blueprint puts forth four general “expectations” for automated systems:

• Protect privacy by design and default.

• Protect the public from unchecked surveillance.

• Provide the public with mechanisms for appropriate and meaningful consent, access, and control over their data.

• Demonstrate that data privacy and user control are protected.


While this document is in no way legally binding, it at least affirms that the issue of AI and privacy is receiving serious consideration at the U.S. federal level.

AI and customer sentiment

Another important driver for organizations to align their use of AI with personal data protection lies in the hearts and minds of consumers. Past surveys have uncovered a strong connection between how an organization uses personal data and the extent to which customers trust it, and that inclination appears to be intensifying now that AI has entered the picture.


Although the average American may have limited understanding of how the technology works, it appears that most know enough to have data concerns with AI. In a recent survey by Cisco, 60 percent of consumers expressed apprehension over how AI is using private information, and 65 percent said they have already lost trust in some organizations due to their use of AI technology.

Best practices for aligning AI and data privacy

While we are just beginning to understand the full scope of AI’s impact on privacy issues, organizations have implemented practices that are proving effective in supporting both their compliance activities and the trust of their customers.


1. Understand your use of AI

To ensure alignment of AI and privacy, you first need a clear picture of where and how AI and machine learning are being used in your organization. Sit down with your internal teams and find out exactly what personal data they’re using in AI applications, how and why they use it, and what happens when they’re done with it.


2. Practice transparency and ensure explainability

Your customers should understand how AI-based platforms are using their personal data and how AI-generated outcomes could affect or have affected them (transparency). The organization should also be ready, willing, and able to explain specific decisions or predictions made by AI systems in terms that the average person can understand (explainability).
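
For development teams wondering where to begin, here’s a minimal sketch in Python (using scikit-learn; the model, data, and feature names are purely illustrative) of one building block for explainability: a global feature-importance check that shows which inputs a model leans on most. Explaining individual decisions typically calls for additional tooling such as SHAP or LIME.

```python
# A rough sketch of one explainability building block: measuring which
# features most influence a model's predictions overall. All names and
# data here are illustrative stand-ins.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

# Stand-in data; imagine four customer attributes feeding a churn model
X, y = make_classification(n_samples=500, n_features=4, random_state=0)
feature_names = ["age", "tenure_months", "monthly_spend", "support_tickets"]

model = RandomForestClassifier(random_state=0).fit(X, y)

# Shuffle each feature and measure how much accuracy drops; bigger drops
# mean the model leaned on that feature more heavily
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
for name, score in sorted(zip(feature_names, result.importances_mean),
                          key=lambda pair: pair[1], reverse=True):
    print(f"{name}: {score:.3f}")
```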


3. Incorporate ethical testing

When developer teams conduct routine testing of an AI-driven platform, incorporating ethical testing into the process can help ensure that the resulting product delivers fair, explainable decisions that are free from bias and inequity.
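
As one illustration, a fairness check like the following could run alongside a standard test suite. This is a minimal sketch of a demographic parity comparison; the group labels, predictions, and tolerance are illustrative assumptions, and real ethical testing would cover multiple metrics and protected attributes.

```python
# A minimal sketch of one automated fairness check: comparing the rate of
# favorable outcomes across groups ("demographic parity"). The tolerance
# and group labels are illustrative, not regulatory thresholds.
from collections import defaultdict

def parity_gap(predictions, groups):
    """Largest difference in favorable-outcome rates between any two groups."""
    totals, favorable = defaultdict(int), defaultdict(int)
    for pred, group in zip(predictions, groups):
        totals[group] += 1
        favorable[group] += int(pred)
    rates = {g: favorable[g] / totals[g] for g in totals}
    return max(rates.values()) - min(rates.values()), rates

# Example: model outputs (1 = favorable) for applicants in groups A and B
preds  = [1, 0, 1, 1, 0, 1, 0, 0, 1, 0]
groups = ["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"]

gap, rates = parity_gap(preds, groups)
print(rates)   # {'A': 0.6, 'B': 0.4}
if gap > 0.2:  # tolerance chosen for illustration only
    print(f"Warning: parity gap of {gap:.2f} exceeds tolerance")
```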


4. De-identify personal data

Strategies such as aggregation, pseudonymization, and anonymization enable businesses to separate personal data from the identities of individuals. Implement the approach—or combination of approaches—that best matches your data and your systems to protect the identities of your customers.
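
As a concrete illustration, here’s a minimal sketch of pseudonymization via salted hashing. The field names and salt handling are assumptions for demonstration; keep in mind that pseudonymized data generally still qualifies as personal data under GDPR, since whoever holds the salt could re-identify individuals.

```python
# A minimal sketch of pseudonymization using a salted hash. Field names
# and salt handling are illustrative, not a complete anonymization scheme.
import hashlib
import secrets

SALT = secrets.token_bytes(16)  # in practice, store and rotate this securely

def pseudonymize(value: str) -> str:
    """Replace a direct identifier with a stable, hard-to-reverse token."""
    return hashlib.sha256(SALT + value.encode("utf-8")).hexdigest()[:16]

record = {"name": "Jane Doe", "email": "jane@example.com", "monthly_spend": 42.50}

safe_record = {
    "user_token": pseudonymize(record["email"]),  # stable key, no direct identity
    "monthly_spend": record["monthly_spend"],     # keep only what the model needs
}
print(safe_record)
```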

5. Minimize use of personal data in AI algorithms

The less personal data your AI platforms collect and use, the lower the chances of unintentionally violating data privacy laws or compromising a customer’s identity. The advice we’ve offered our clients from the beginning still applies with AI in the picture (see the sketch following this list):

• Collect only as much data as necessary (after making sure you have the right to collect it).

• Grant access to as few people/applications as possible.

• Only hold it for as long as you need it.
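
Here’s the sketch referenced above: a minimal illustration of data minimization applied to a model-training pipeline. The column names and one-year retention window are assumptions for demonstration purposes.

```python
# A minimal sketch of data minimization ahead of model training: keep only
# the columns the model needs and drop records past a retention window.
from datetime import datetime, timedelta, timezone

import pandas as pd

REQUIRED_FEATURES = ["tenure_months", "monthly_spend"]  # all the model needs
RETENTION = timedelta(days=365)                         # hold only as long as needed

now = datetime.now(timezone.utc)
raw = pd.DataFrame({
    "name": ["Jane Doe", "John Roe"],
    "email": ["jane@example.com", "john@example.com"],
    "tenure_months": [24, 6],
    "monthly_spend": [42.5, 17.0],
    "collected_at": [now - timedelta(days=30), now - timedelta(days=400)],
})

# Enforce retention, then strip everything but the required features
fresh = raw[raw["collected_at"] > now - RETENTION]
training_data = fresh[REQUIRED_FEATURES]  # identifiers never reach the model
print(training_data)
```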


6. Establish data privacy processes specific to AI systems

Because AI-based applications rely on complex algorithms for data processing and decision making, applying your organization’s data privacy policies can be challenging. Work with your developer teams to make sure applicable policies are followed at each step of AI-driven processes. For example, if an individual exercises their “right to be forgotten,” there should be specific procedures in place to remove their data from AI-driven processes. And under Article 22 of GDPR, data subjects have the right to request “human intervention” in contesting decisions based solely on automated processing. Whether a business is covered by GDPR or not, it’s a good idea to have processes in place to accommodate these requests.
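
As a rough illustration of what such a procedure might look like in code, here’s a minimal sketch of an erasure-request handler that purges a data subject from the stores feeding AI processes and logs the action. The store names and audit format are assumptions; data already incorporated into a trained model may additionally require retraining or other remediation.

```python
# A minimal sketch of a "right to be forgotten" workflow for AI pipelines:
# purge the subject from every store feeding the models and log the action.
from datetime import datetime, timezone

# Illustrative stores that feed AI-driven processes
training_records = {"u123": {"tenure_months": 24}, "u456": {"tenure_months": 6}}
feature_store = {"u123": {"churn_score": 0.7}, "u456": {"churn_score": 0.2}}
audit_log = []

def handle_erasure_request(user_id: str) -> None:
    """Remove a data subject from all AI data stores and record the action."""
    removed_from = []
    for name, store in [("training_records", training_records),
                        ("feature_store", feature_store)]:
        if store.pop(user_id, None) is not None:
            removed_from.append(name)
    audit_log.append({
        "user_id": user_id,
        "removed_from": removed_from,
        "timestamp": datetime.now(timezone.utc).isoformat(),
    })

handle_erasure_request("u123")
print(audit_log)  # evidence the request was honored, and where
```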


7. Do a privacy assessment specifically for AI functions

A formal Data Protection Impact Assessment (DPIA) is currently only required by GDPR for “high-risk” processes. Since AI-based decision making excludes the human factor, we view it as inherently high risk and incorporate these processes in our clients’ GDPR readiness strategies. Even for organizations not covered by GDPR, conducting a privacy assessment for every process involving personal data is always a wise move, especially when artificial intelligence is involved.


8. Establish accountability and internal governance

Let’s face it: even when they’re designed to meet the highest standards of data privacy, AI systems don’t always perform as they should. When that happens, it’s not always clear who should be held accountable. Verify that your organization has an accountability structure in place to ensure that potential data privacy violations are identified and addressed as promptly as possible—and that measures are put in place to prevent the error from recurring.


9. Get involved in legislative and regulatory processes

If your organization has the opportunity to get involved in the writing of new regulations or legislation—or the amendment of existing ones—the time and effort will be well spent. Digital giants such as Uber and Google frequently take an active role in the legislative process to speak for business interests, but in most cases any organization has the option of participating.


Shaping the future of AI and data privacy

While artificial intelligence and machine learning are certainly nothing new, their growing prevalence across business processes is complicating the mission to protect personal data. By following proven best practices—and keeping an open ear for future strategies—organizations can continue reaping the benefits of AI-driven applications while preserving both the trust of their customers and their compliance with data privacy laws.


Digital transformation done right

We create powerful custom tools, optimize packaged software, and provide trusted guidance to enable your teams and deliver business value that lasts.

  • RPA
  • Virtual assistants
  • Solution engineering
  • Cloud solutions
  • Enterprise architecture
  • Digital strategy
Author

Sarah Davis is a Data Privacy Manager in Logic20/20’s Strategy & Operations practice, with deep experience in GDPR and CCPA compliance strategies as well as data privacy assessments.